
Introductory Resources on AI Risks

Why are people so worried about AI?
Published: September 18, 2023
Author: Will Jones
A frame from our recent short film 'Artificial Escalation', which presents an AI disaster scenario.

Contents

This short list of resources explains the major risks from AI, with a focus on the risk of human extinction. It is intended as an introduction and is by no means exhaustive.

The basics – How AI could kill us all

Deeper dives into the extinction risks

Academic papers

Videos and podcasts

Books

  • The Alignment Problem by Brian Christian (2020)
  • Life 3.0 by Max Tegmark (2017)
  • Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell (2019)
  • Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World by Darren McKee (2023)

Additional AI risk areas – Other than extinction

This content was first published at futureoflife.org on September 18, 2023.

About the Future of Life Institute

The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. Since its founding in 2014, FLI has worked to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks. Find out more about our mission or explore our work.

Related content


If you enjoyed this content, you might also be interested in:

The Pause Letter: One year later

It has been one year since our 'Pause AI' open letter sparked a global debate on whether we should temporarily halt giant AI experiments.
March 22, 2024
