
Andrew Critch on AI Research Considerations for Human Existential Safety

Published
16 September, 2020

In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger titled AI Research Considerations for Human Existential Safety. We explore a wide range of issues, from how the mainstream computer science community views AI existential risk, to the need for more accurate terminology in the field of AI existential safety and the risks of what Andrew calls prepotent AI systems. Crucially, we also discuss what Andrew sees as being the most likely source of existential risk: the possibility of externalities from multiple AIs and AI stakeholders competing in a context where alignment and AI existential safety issues are not naturally covered by industry incentives.

 Topics discussed in this episode include:

  • The mainstream computer science view of AI existential risk
  • Distinguishing AI safety from AI existential safety 
  • The need for more precise terminology in the field of AI existential safety and alignment
  • The concept of prepotent AI systems and the problem of delegation 
  • Which alignment problems get solved by commercial incentives and which don’t
  • The threat of diffusion of responsibility on AI existential safety considerations not covered by commercial incentives
  • Prepotent AI risk types that lead to unsurvivability for humanity 

 

Timestamps: 

0:00 Intro
2:53 Why Andrew wrote ARCHES and what it’s about
6:46 The perspective of the mainstream CS community on AI existential risk
13:03 ARCHES in relation to AI existential risk literature
16:05 The distinction between safety and existential safety
24:27 Existential risk is most likely to obtain through externalities
29:03 The relationship between existential safety and safety for current systems
33:17 Research areas that may not be solved by natural commercial incentives
51:40 What’s an AI system and an AI technology?
53:42 Prepotent AI
59:41 Misaligned prepotent AI technology
01:05:13 Human frailty
01:07:37 The importance of delegation
01:14:11 Single-single, single-multi, multi-single, and multi-multi
01:15:26 Control, instruction, and comprehension
01:20:40 The multiplicity thesis
01:22:16 Risk types from prepotent AI that lead to human unsurvivability
01:34:06 Flow-through effects
01:41:00 Multi-stakeholder objectives
01:49:08 Final words from Andrew

 

Citations:

AI Research Considerations for Human Existential Safety

 

We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application. You can find all the AI Alignment Podcasts here.

