Video Series
Videos on Artificial Intelligence produced by iQ Studios.



Surviving AI, Episode 6 Short: What is the alignment problem?
AI begins as a tool, but it will not remain one. AI will learn to set its own goals. What if its goals don’t align with ours?


Surviving AI, Episode 5 Short: Can we lock AI up?
Could we lock up AI like we secure plutonium or other dangerous technology? Can we prevent it from falling into the hands of bad actors?


Surviving AI, Episode 4 Short: Can we program AI to be safe?
The idea of programming safety rules into intelligent systems dates back at least to the science fiction author Isaac Asimov and his “laws of robotics.”


Surviving AI, Episode 3 Short: Can we regulate AI?
Regulation is a standard answer for dangerous technologies, but there are problems with this approach. Dr. Kaplan discusses three major challenges.


Surviving AI, Episode 2 Short: How dangerous is AI?
Dr. Kaplan discusses the existential threat posed by AI with clips from Geoffrey Hinton (former Google Fellow) and Jerry Kaplan (Lecturer at Stanford).


Surviving AI, Episode 1 Short: What is AI?
Learn about the origins of AI.

How to Create AGI and Not Die: IFoRE / Sigma Xi Conference Presentation 2023
A presentation by Dr. Craig A. Kaplan at the IFoRE / Sigma Xi Conference on 11/10/23.


Surviving AI, Episode 26: Now is the time to make AI safe!
There is a relatively short window during which humans can influence the development of AGI and SuperIntelligence.


Surviving AI, Episode 25: Summary of AI Safety
A recap of the main points of the Surviving AI series.


Surviving AI, Episode 24: More detail on how to build safe AGI
An overview of multiple pending patents describing in detail how to build safe and ethical Artificial General Intelligence (AGI).


Surviving AI, Episode 23: What makes the AGI network safe?
Three ways to increase the safety of AGI are explained.


Surviving AI, Episode 22: How does the AGI network learn?
There is more than one way to teach an AI.


Surviving AI, Episode 21: Can we train Advanced Autonomous AIs (AAAIs) to be saintly?
iQ Company research shows that it is possible to change LLM behavior.


Surviving AI, Episode 20: How do Advanced Autonomous AIs (AAAIs) learn?
Dr. Kaplan describes the collective intelligence approach to Artificial General Intelligence.


Surviving AI, Episode 19: What are customized AI agents (AAAIs)?
You and I can customize Advanced Autonomous Artificial Intelligences (AAAIs) by teaching them both our knowledge and values.


Surviving AI, Episode 18: What is a Problem Solving (AI) Framework?
An explanation of a universal problem-solving framework and why the one invented by Dr. Herbert Simon may be the best.


Surviving AI, Episode 17: What is a human CI (AI) network?
Existing human collective intelligence networks can be stitched together to form the fabric of safe AGI.


Surviving AI, Episode 16: How to build an AGI Network
The best way to build AGI begins with building a human collective intelligence network and then adding AI agents to it.


Surviving AI, Episode 15: Does Collective Intelligence Work?
Dr. Kaplan explains how Active Collective Intelligence systems have successfully tackled the most challenging problems.


Surviving AI, Episode 14: What is Collective Intelligence?
A comparison of passive and active collective intelligence, and how collective intelligence means many minds are better than one.


Surviving AI, Episode 13: How Do We Build Safe AGI?
The fastest and safest path to AGI involves building a collective intelligence network of humans and AI agents.


Surviving AI, Episode 12: What is the Fastest and Safest Path to AGI?
Dr. Kaplan argues that if we know how to build AGI safely, we should actually speed up development instead of slowing it down.


Surviving AI, Episode 11: Should We Slow Down AI Development?
Thousands of AI researchers have called for a pause in the development of the most advanced AI systems. But is that the best approach?


Surviving AI, Episode 10: How to Avoid Extinction by AI
Designing AI systems with “humans in the loop” and with democratic values are two essential principles for increasing AI safety and avoiding extinction.