Focus Area

Artificial Intelligence

From recommender algorithms to chatbots to self-driving cars, AI is changing our lives. As the impact of this technology grows, so will the risks.
Spotlight

We must not build AI to replace humans.

A new essay by Anthony Aguirre, Executive Director of the Future of Life Institute

Humanity is on the brink of developing artificial general intelligence that exceeds our own. It's time to close the gates on AGI and superintelligence... before we lose control of our future.


Artificial Intelligence is racing forward. Companies are increasingly creating general-purpose AI systems that can perform many different tasks. Large language models (LLMs) can compose poetry, create dinner recipes and write computer code. Some of these models already pose major risks, such as the erosion of democratic processes, rampant bias and misinformation, and an arms race in autonomous weapons. But there is worse to come.

AI systems will only get more capable. Corporations are actively pursuing ‘artificial general intelligence’ (AGI), which can perform as well as or better than humans at a wide range of tasks. These companies promise this will bring unprecedented benefits, from curing cancer to ending global poverty. On the flip side, more than half of AI experts believe there is a one-in-ten chance this technology will cause our extinction.

This belief has nothing to do with the evil robots or sentient machines seen in science fiction. In the short term, advanced AI can empower those seeking to do harm – bioterrorists, for instance – by carrying out complex technical tasks on their behalf, quickly and without conscience.

In the longer term, we should not fixate on any one particular method of harm, because the risk comes from greater intelligence itself. Consider how humans overpower less intelligent animals without relying on any particular weapon, or how an AI chess program defeats human players without relying on any specific move.

Militaries could lose control of a high-performing system designed to do harm, with devastating impact. An advanced AI system tasked with maximising company profits could employ drastic, unpredictable methods. Even an AI programmed to do something altruistic could pursue a destructive method to achieve that goal. We currently have no good way of knowing how AI systems will act, because no one, not even their creators, understands how they work.

AI safety has now become a mainstream concern. Experts and the wider public are united in their alarm at emerging risks and the pressing need to manage them. But concern alone will not be enough. We need policies to help ensure that AI development improves lives everywhere, rather than merely boosting corporate profits. And we need proper governance, including robust regulation and capable institutions that can steer this transformative technology away from extreme risks and towards the benefit of humanity.


Other focus areas

Explore the other focus areas that we consider most pressing:

Nuclear Weapons

Almost eighty years after their introduction, the risks posed by nuclear weapons are as high as ever – and new research reveals that the impacts are even worse than previously reckoned.

Biotechnology

From the accidental release of engineered pathogens to the backfiring of a gene-editing experiment, the dangers from biotechnology are too great for us to proceed blindly.
Our content

Recent content on Artificial Intelligence

Posts

Are we close to an intelligence explosion?

AIs are inching ever closer to a critical threshold. Beyond this threshold lie great risks – but crossing it is not inevitable.
21 March, 2025

The Impact of AI in Education: Navigating the Imminent Future

What must be considered to build a safe but effective future for AI in education, and for children to be safe online?
13 February, 2025

A Buddhist Perspective on AI: Cultivating freedom of attention and true diversity in an AI future

The AI-facilitated intelligence revolution is claimed by some to be setting humanity on a glidepath into utopian futures of nearly effortless satisfaction and frictionless choice. We should beware.
20 January, 2025

Could we switch off a dangerous AI?

New research validates age-old concerns about the difficulty of constraining powerful AI systems.
27 December, 2024

AI Safety Index Released

The Future of Life Institute has released its first safety scorecard of leading AI companies, finding many are not addressing safety concerns while some have taken small initial steps in the right direction.
11 December, 2024

Why You Should Care About AI Agents

Powerful AI agents are about to hit the market. Here we explore the implications.
4 December, 2024

Max Tegmark on AGI Manhattan Project

A new report for Congress recommends that the US start a "Manhattan Project" to build Artificial General Intelligence. To do so would be a suicide race.
20 November, 2024

FLI Statement on White House National Security Memorandum

Last week the White House released a National Security Memorandum concerning AI governance and risk management. The NSM issues guidance […]
28 October, 2024

Resources

US Federal Agencies: Mapping AI Activities

This guide outlines AI activities across the US Executive Branch, focusing on regulatory authorities, budgets, and programs.
9 September, 2024

Catastrophic AI Scenarios

Concrete examples of how AI could go wrong
1 February, 2024

Introductory Resources on AI Risks

Why are people so worried about AI?
18 September, 2023

AI Policy Resources

The resources that describe and respond to the policy challenges generated by AI are always in flux. This page is here to help you stay up to date by listing some of the best resources currently available.
21 March, 2023

Policy papers

Staffer’s Guide to AI Policy: Congressional Committees and Relevant Legislation

March 2025

Recommendations for the U.S. AI Action Plan

March 2025

Safety Standards Delivering Controllable and Beneficial AI Tools

February 2025

FLI AI Safety Index 2024

December 2024

Open letters

Signatories: 2,672

Open letter calling on world leaders to show long-view leadership on existential threats

The Elders, Future of Life Institute and a diverse range of co-signatories call on decision-makers to urgently address the ongoing impact and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI.
14 February, 2024
Signatories: Closed

AI Licensing for a Better Future: On Addressing Both Present Harms and Emerging Threats

This joint open letter by Encode Justice and the Future of Life Institute calls for the implementation of three concrete US policies in order to address current and future harms of AI.
25 October, 2023
Signatories: 31,810

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
22 March, 2023
Signatories: Closed

Foresight in AI Regulation Open Letter

The emergence of artificial intelligence (AI) promises dramatic changes in our economic and social structures as well as everyday life […]
14 June, 2020

Future of Life Awards

Future of Life Award 2024

Three leading researchers and scholars were honored by the Future of Life Institute for laying the foundation of modern ethics and safety considerations for artificial intelligence and computers.
Winners: James H. Moor, Batya Friedman, Steve Omohundro
For laying the foundations of modern computer ethics and AI safety
