Darren McKee on Uncontrollable Superintelligence
1 December, 2023
Video
![FLI Podcast thumbnail: Darren McKee](https://futureoflife.org/wp-content/uploads/2024/01/FLI-Podcast-thumbnail-Darren-McKee.jpg)
Darren McKee joins the podcast to discuss why AI might be difficult to control, which goals and traits AI systems may develop, and whether there is a unified solution to AI alignment.
Timestamps:
00:00 Uncontrollable superintelligence
16:41 AI goals and the "virus analogy"
28:36 Speed of AI cognition
39:25 Narrow AI and autonomy
52:23 Reliability of current and future AI
1:02:33 Planning for multiple AI scenarios
1:18:57 Will AIs seek self-preservation?
1:27:57 Is there a unified solution to AI alignment?
1:30:26 Concrete AI safety proposals
Podcast
Related episodes
If you enjoyed this episode, you might also like:
![Episode thumbnail: David Dalrymple](https://futureoflife.org/wp-content/uploads/2025/01/12-17-davidad-1290x720-R2-V3A-1024x576.jpg)
9 January, 2025
David Dalrymple on Safeguarded, Transformative AI
![Episode thumbnail: Nathan Labenz](https://futureoflife.org/wp-content/uploads/2024/12/12-02-Nathan-Labenz-1290x720-R1-V1A-1024x576.jpg)
5 December, 2024
Nathan Labenz on the State of AI and Progress since GPT-4