Video Series
Videos on Artificial Intelligence produced by iQ Studios.



Surviving AI, Episode 26: Now is the time to make AI safe!
There is a relatively short window during which humans can influence the development of AGI and SuperIntelligence.


Surviving AI, Episode 25: Summary of AI Safety
A recap of the main points of the Surviving AI series.


Surviving AI, Episode 24: More detail on how to build safe AGI
An overview of multiple pending patents describing in detail how to build safe and ethical Artificial General Intelligence (AGI).


Surviving AI, Episode 23: What makes the AGI network safe?
Three ways to increase the safety of AGI are explained.


Surviving AI, Episode 22: How does the AGI network learn?
There is more than one way to teach an AI.


Surviving AI, Episode 21: Can we train Advanced Autonomous AIs (AAAIs) to be saintly?
iQ Company research shows that it is possible to change LLM behavior.


Surviving AI, Episode 16: How to build an AGI Network
The best way to build AGI begins with building a human collective intelligence network and then adding AI agents to it.


Surviving AI, Episode 12: What is the Fastest and Safest Path to AGI?
Dr. Kaplan argues that if we know how to build AGI safely, we should actually speed up development instead of slowing it down.


Surviving AI, Episode 10: How to Avoid Extinction by AI
Keeping “humans in the loop” and designing AI systems with democratic values are two essential principles for increasing AI safety and avoiding extinction.


Surviving AI, Episode 9: Can We Increase the Odds of Human Survival with AI?
Nearly half of AI experts say there's a 10% or greater chance of extinction by AI. What if we could improve our survival odds by even 1%?


Surviving AI, Episode 7: How Do We Solve The Alignment Problem (post AGI)?
Aligning the values of AGI with positive human values is the key to ensuring that humans survive and prosper.


Surviving AI, Episode 6: The Alignment Problem
AI begins as a tool but it will not remain one; AI will learn to set its own goals. What if its goals don’t align with ours?


Surviving AI, Episode 5: Can We Lock AI Up?
Could we lock up AI like we secure plutonium or other dangerous technology? Can we prevent it from falling into the hands of bad actors?


Surviving AI, Episode 4: Can We Program AI to be Safe?
Dr. Kaplan discusses the problems with programming safety rules into AI.


Surviving AI, Episode 3: Can We Regulate AI?
Regulation is a standard answer for dangerous technologies, but this approach runs into problems when applied to AI.


Surviving AI, Episode 1: What is AI?
Demis Hassabis, CEO of DeepMind, appears in the thumbnail for "Unraveling the Mystery of AI," in which Dr. Kaplan explains the origins of AI.


Surviving AI, Episode 2: Human Intelligence
To understand Planetary Intelligence, it's helpful to first understand human intelligence.