Video Series
Videos on Artificial Intelligence produced by iQ Studios.



Surviving AI, Episode 10 Short: How to avoid extinction by AI
Designing AI systems with “humans in the loop” and with democratic values are two essential principles key to human survival.


Surviving AI, Episode 8 Short: How do we make AI safer?
How will Artificial Intelligence determine what is right and what is wrong?


Surviving AI, Episode 5 Short: Can we lock AI up?
Could we lock up AI like we secure plutonium or other dangerous technology? Can we prevent it from falling into the hands of bad actors?


Surviving AI, Episode 1 Short: What is AI?
Learn about the origins of AI.


Surviving AI, Episode 26: Now is the time to make AI safe!
There is a relatively short window during which humans can influence the development of AGI and superintelligence.


Surviving AI, Episode 25: Summary of AI Safety
A recap of the main points of the Surviving AI series.


Surviving AI, Episode 24: More detail on how to build safe AGI
An overview of multiple pending patents describing in detail how to build safe and ethical Artificial General Intelligence (AGI).


Surviving AI, Episode 23: What makes the AGI network safe?
Three ways to increase the safety of AGI are explained.


Surviving AI, Episode 22: How does the AGI network learn?
There is more than one way to teach an AI.


Surviving AI, Episode 21: Can we train Advanced Autonomous AIs (AAAIs) to be saintly?
iQ Company research shows that it is possible to change LLM behavior.


Surviving AI, Episode 20: How do Advanced Autonomous AIs (AAAIs) learn?
Dr. Kaplan describes the collective intelligence approach to Artificial General Intelligence.


Surviving AI, Episode 14: What is Collective Intelligence?
A comparison of passive and active AI intelligence, and how collective intelligence means many minds are better than one.


Surviving AI, Episode 13: How Do We Build Safe AGI?
The fastest and safest path to AGI involves building a collective intelligence network of humans and AI agents.


Surviving AI, Episode 11: Should We Slow Down AI Development?
Thousands of AI researchers have called for a pause in the development of the most advanced AI systems. But is that the best approach?


Surviving AI, Episode 10: How to Avoid Extinction by AI
Designing AI systems with “humans in the loop” and with democratic values are two essential principles to increase AI safety and avoid extinction.


Surviving AI, Episode 8: How Do We Make AI Safer?
How will Artificial Intelligence determine what is right and what is wrong?


Surviving AI, Episode 7: How Do We Solve The Alignment Problem (post AGI)?
Aligning the values of AGI with positive human values is the key to ensuring that humans survive and prosper.


Surviving AI, Episode 6: The Alignment Problem
AI begins as a tool but it will not remain one; AI will learn to set its own goals. What if its goals don’t align with ours?


Surviving AI, Episode 5: Can We Lock AI Up?
Could we lock up AI like we secure plutonium or other dangerous technology? Can we prevent it from falling into the hands of bad actors?


Surviving AI, Episode 4: Can We Program AI to be Safe?
Dr. Kaplan discusses the problems with programming safety rules into AI.