Liron Shapira on Superintelligence Goals
19 April, 2024
![](https://futureoflife.org/wp-content/uploads/2024/05/Podcast-thumbnails-Liron-Shapira-1024x576.jpg)
Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, the risks of centralizing power, and whether AI can defend us from AI.
Timestamps:
00:00 Intelligence as optimization-power
05:18 Will LLMs imitate human values?
07:15 Why would AI develop dangerous goals?
09:55 Goal-completeness
12:53 Alignment to which values?
22:12 Is AI just another technology?
31:20 What is FOOM?
38:59 Risks from centralized power
49:18 Can AI defend us against AI?
56:28 An Apollo program for AI safety
01:04:49 Do we only have one chance?
01:07:34 Are we living in a crucial time?
01:16:52 Would superintelligence be fragile?
01:21:42 Would human-inspired AI be safe?