
Introductory Resources on AI Risks

Why are people so worried about AI?
September 18, 2023
Will Jones
Image: A frame from our recent short film 'Artificial Escalation' which presents an AI disaster scenario.


This is a short list of resources that explain the major risks from AI, with a focus on the risk of human extinction. It is meant as an introduction and is by no means exhaustive.

The basics - How AI could kill us all

Deeper dives into the extinction risks

Academic papers

Videos and podcasts


  • The Alignment Problem by Brian Christian (2020)
  • Life 3.0 by Max Tegmark (2017)
  • Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell (2019)

Additional AI risk areas - Other than extinction

Our content

Related posts

If you enjoyed this, you also might like:

As Six-Month Pause Letter Expires, Experts Call for Regulation on Advanced AI Development

This week marks six months since the open letter calling for a six-month pause on giant AI experiments. A lot has happened since then. Our signatories reflect on what needs to happen next.
September 21, 2023

US Senate Hearing 'Oversight of AI: Principles for Regulation': Statement from the Future of Life Institute

We implore Congress to immediately regulate these systems before they cause irreparable damage, and provide five principles for effective oversight.
July 25, 2023
