Introductory Resources on AI Risks
Why are people so worried about AI?
September 18, 2023
Will Jones
This is a short list of resources that explain the major risks from AI, with a focus on the risk of human extinction. It is meant as an introduction and is by no means exhaustive.
The basics – How AI could kill us all
- AI experts are increasingly afraid of what they’re creating by Kelsey Piper at Vox (2022) — A very accessible introduction to why AI “might kill us all”.
- The ‘Don’t Look Up’ Thinking That Could Doom Us With AI by Max Tegmark for TIME (2023) — An easy-to-read response to common objections, drawing analogies to the 2021 film.
Deeper dives into the extinction risks
- FAQ on Catastrophic AI Risk by Yoshua Bengio (2023) — One of the “godfathers of AI” addresses AI risks in a Q&A format.
- Most Important Century Series by Holden Karnofsky (2022) — Karnofsky argues the far future will look radically unfamiliar, and may be determined by the AI we develop this century. Highlights include AI Could Defeat All Of Us Combined and Why Would AI “Aim” To Defeat Humanity?
- The Need For Work On Technical AI Alignment by Daniel Eth (2023) — A semi-technical explanation of “the alignment problem”, how it could be catastrophic for humanity, and how we can solve it.
Academic papers
- Joseph Carlsmith (2022) — Is Power-Seeking AI an Existential Risk?
- Richard Ngo, Lawrence Chan, Sören Mindermann (2022) — The alignment problem from a deep learning perspective
- Karina Vold and Daniel R. Harris (2021) — How Does Artificial Intelligence Pose an Existential Risk?
- Benjamin S. Bucknall and Shiri Dori-Hacohen (2022) — Current and Near-Term AI as a Potential Existential Risk Factor
- Chan et al. (2023) — Harms from Increasingly Agentic Algorithmic Systems
- Acemoglu and Lensman (2023) — Regulating Transformative Technologies
- Dan Hendrycks, Mantas Mazeika and Thomas Woodside (2023) — An Overview of Catastrophic AI Risks
Videos and podcasts
- Why would AI want to do bad things? — Robert Miles (2018)
- How do we prevent the AIs from killing us? — Paul Christiano on Bankless (2023)
- Pausing the AI Revolution? — Jaan Tallinn on The Cognitive Revolution (2023)
- The Case for Halting AI Development — Max Tegmark on the Lex Fridman Podcast (2023)
- Don’t Look Up – The Documentary: The Case For AI As An Existential Threat — DaganOnAI (2023)
Books
- The Alignment Problem by Brian Christian (2020)
- Life 3.0 by Max Tegmark (2017)
- Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell (2019)
- Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World by Darren McKee (2023)
Additional AI risk areas – Other than extinction
- AI Now Institute research areas — resources on present AI harms, including accountability, climate, labour, privacy, biometric risks, large-scale AI models and more.
- Algorithmic Justice League education page — articles on AI issues like facial recognition, racial discrimination and social justice.
- Stepping back from the brink: Why multilateral regulation of autonomy in weapons systems is difficult, yet imperative and feasible — Frank Sauer for the IRRC (2020)
- The Risks of Autonomous Weapons Systems for Crisis Stability and Conflict Escalation in Future U.S.-Russia Confrontations — Burgess Laird for The RAND Blog (2020)
- ICRC Position Paper on Autonomous Weapons Systems — The International Committee of the Red Cross (2021)
This content was first published at futureoflife.org on September 18, 2023.
About the Future of Life Institute
The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. Since its founding in 2014, FLI has worked to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks. Find out more about our mission or explore our work.