Introductory Resources on AI Risks
Why are people so worried about AI?
September 18, 2023
Will Jones

This is a short list of resources that explain the major risks from AI, with a focus on the risk of human extinction. It is intended as an introduction and is by no means exhaustive.
The basics - How AI could kill us all
- AI experts are increasingly afraid of what they’re creating by Kelsey Piper at Vox (2022) — A very accessible introduction to why AI “might kill us all”.
- The 'Don't Look Up' Thinking That Could Doom Us With AI by Max Tegmark for TIME (2023) — An easy-to-read response to common objections, drawing analogies to the 2021 film.
Deeper dives into the extinction risks
- FAQ on Catastrophic AI Risk by Yoshua Bengio (2023) — One of the “godfathers of AI” addresses AI risks in a Q&A format.
- Most Important Century Series by Holden Karnofsky (2022) — Karnofsky argues the far future will look radically unfamiliar, and may be determined by the AI we develop this century. Highlights include AI Could Defeat All Of Us Combined and Why Would AI “Aim” To Defeat Humanity?
- The Need For Work On Technical AI Alignment by Daniel Eth (2023) — A semi-technical explanation of “the alignment problem”, how it could be catastrophic for humanity, and how we can solve it.
Academic papers
- Joseph Carlsmith (2022) — Is Power-Seeking AI an Existential Risk?
- Richard Ngo, Lawrence Chan, Sören Mindermann (2022) — The alignment problem from a deep learning perspective
- Karina Vold and Daniel R. Harris (2021) — How Does Artificial Intelligence Pose an Existential Risk?
- Benjamin S. Bucknall and Shiri Dori-Hacohen (2022) — Current and Near-Term AI as a Potential Existential Risk Factor
- Alan Chan et al. (2023) — Harms from Increasingly Agentic Algorithmic Systems
- Daron Acemoglu and Todd Lensman (2023) — Regulating Transformative Technologies
- Dan Hendrycks, Mantas Mazeika and Thomas Woodside (2023) — An Overview of Catastrophic AI Risks
Videos and podcasts
- Why would AI want to do bad things? — Robert Miles (2018)
- How do we prevent the AIs from killing us? — Paul Christiano on Bankless (2023)
- Pausing the AI Revolution? — Jaan Tallinn on The Cognitive Revolution (2023)
- The Case for Halting AI Development — Max Tegmark on the Lex Fridman Podcast (2023)
- Don't Look Up - The Documentary: The Case For AI As An Existential Threat — DaganOnAI (2023)
Books
- The Alignment Problem: Machine Learning and Human Values by Brian Christian (2020)
- Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark (2017)
- Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell (2019)
Additional AI risk areas - Other than extinction
- AI Now Institute research areas — resources on present AI harms, including accountability, climate, labour, privacy, biometric risks, large-scale AI models and more.
- Algorithmic Justice League education page — articles on AI issues like facial recognition, racial discrimination and social justice.
- Stepping back from the brink: Why multilateral regulation of autonomy in weapons systems is difficult, yet imperative and feasible — Frank Sauer for the IRRC (2020)
- The Risks of Autonomous Weapons Systems for Crisis Stability and Conflict Escalation in Future U.S.-Russia Confrontations — Burgess Laird for The RAND Blog (2020)
- ICRC Position Paper on Autonomous Weapons Systems — The International Committee of the Red Cross (2021)