Cause Area

Artificial Intelligence

From recommender algorithms to chatbots to self-driving cars, AI is changing our lives. As the impact of this technology grows, so will the risks.

Artificial Intelligence is racing forward. Companies are increasingly creating general-purpose AI systems that can perform many different tasks. Large language models (LLMs) can compose poetry, create dinner recipes and write computer code. Some of these models already pose major risks, such as the erosion of democratic processes, rampant bias and misinformation, and an arms race in autonomous weapons. But there is worse to come.

AI systems will only get more capable. Corporations are actively pursuing ‘artificial general intelligence’ (AGI), which can perform as well as or better than humans at a wide range of tasks. These companies promise this will bring unprecedented benefits, from curing cancer to ending global poverty. On the flip side, more than half of AI experts believe there is a one in ten chance this technology will cause our extinction.

This belief has nothing to do with the evil robots or sentient machines seen in science fiction. In the short term, advanced AI can enable those seeking to do harm – bioterrorists, for instance – by carrying out complex tasks for them, quickly and without conscience.

In the longer term, we should not fixate on one particular method of harm, because the risk comes from greater intelligence itself. Consider how humans overpower less intelligent animals without relying on any particular weapon, or how an AI chess program defeats human players without relying on any specific move.

Militaries could lose control of a high-performing system designed to do harm, with devastating impact. An advanced AI system tasked with maximising company profits could employ drastic, unpredictable methods. Even an AI programmed to do something altruistic could pursue a destructive method to achieve that goal. We currently have no good way of knowing how AI systems will act, because no one, not even their creators, understands how they work.

AI safety has now become a mainstream concern. Experts and the wider public are united in their alarm at emerging risks and the pressing need to manage them. But concern alone will not be enough. We need policies to help ensure that AI development improves lives everywhere – rather than merely boosts corporate profits. And we need proper governance, including robust regulation and capable institutions that can steer this transformative technology away from extreme risks and towards the benefit of humanity.


Recommended reading

Benefits & Risks of Artificial Intelligence

From SIRI to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons.
Read article
Our content

Featured content on Artificial Intelligence

Posts

Disrupting the Deepfake Pipeline in Europe

Leveraging corporate criminal liability under the Violence Against Women Directive to safeguard against pornographic deepfake exploitation.
February 22, 2024

Realising Aspirational Futures – New FLI Grants Opportunities

Our Futures Program, launched in 2023, aims to guide humanity towards the beneficial outcomes made possible by transformative technologies. This year, as […]
February 14, 2024

Gradual AI Disempowerment

Could an AI takeover happen gradually?
February 1, 2024

Exploration of secure hardware solutions for safe AI deployment

This collaboration between the Future of Life Institute and Mithril Security explores hardware-backed AI governance tools for transparency, traceability, and confidentiality.
November 30, 2023

Protect the EU AI Act

A last-ditch assault on the EU AI Act threatens to jeopardise one of the legislation's most important functions: preventing our most powerful AI models from causing widespread harm to society.
November 22, 2023

Miles Apart: Comparing key AI Act proposals

Our analysis shows that the recent non-paper drafted by Italy, France, and Germany largely fails to provide any provisions on foundation models or general purpose AI systems, and offers much less oversight and enforcement than the existing alternatives.
November 21, 2023

Can we rely on information sharing?

We have examined the Terms of Use of major General-Purpose AI system developers and found that they fail to provide assurances about the quality, reliability, and accuracy of their products or services.
October 26, 2023

Written Statement of Dr. Max Tegmark to the AI Insight Forum

The Future of Life Institute President addresses the AI Insight Forum on AI innovation and provides five US policy recommendations.
October 24, 2023

As Six-Month Pause Letter Expires, Experts Call for Regulation on Advanced AI Development

This week will mark six months since the open letter calling for a six-month pause on giant AI experiments. Since then, a lot has happened. Our signatories reflect on what needs to happen next.
September 21, 2023

Characterizing AI Policy using Natural Language Processing

As interest in Artificial Intelligence (AI) grows across the globe, governments have focused their attention on identifying the soft and […]
December 16, 2022

Superintelligence survey

The Future of AI – What Do You Think? Max […]
August 15, 2017

A Principled AI Discussion in Asilomar

The Asilomar Conference took place against a backdrop of growing interest from wider society in the potential of artificial intelligence […]
January 18, 2017

Introductory Resources on AI Safety Research

Reading list to get up to speed on the main ideas in the field. The resources are selected for relevance and/or brevity, […]
February 29, 2016

AI FAQ

Frequently Asked Questions about the Future of Artificial Intelligence […]
October 12, 2015

Resources

Catastrophic AI Scenarios

Concrete examples of how AI could go wrong
February 1, 2024

Introductory Resources on AI Risks

Why are people so worried about AI?
September 18, 2023

Global AI Policy

How countries and organizations around the world are approaching the benefits and risks of AI. Artificial intelligence (AI) holds great […]
December 16, 2022

AI Value Alignment Research Landscape

This landscape synthesizes a variety of AI safety research agendas along with other papers in AI, machine learning, ethics, governance, […]
November 16, 2018

Policy papers

Competition in Generative AI: Future of Life Institute’s Feedback to the European Commission’s Consultation

March 2024

European Commission Manifesto

March 2024

Chemical & Biological Weapons and Artificial Intelligence: Problem Analysis and US Policy Recommendations

February 2024

FLI Response to OMB: Request for Comments on AI Governance, Innovation, and Risk Management

February 2024

FLI Response to NIST: Request for Information on NIST’s Assignments under the AI Executive Order

February 2024

FLI Response to Bureau of Industry and Security (BIS): Request for Comments on Implementation of Additional Export Controls

February 2024

Response to CISA Request for Information on Secure by Design AI Software

February 2024

Artificial Intelligence and Nuclear Weapons: Problem Analysis and US Policy Recommendations

November 2023

FLI Governance Scorecard and Safety Standards Policy (SSP)

October 2023

Cybersecurity and AI: Problem Analysis and US Policy Recommendations

October 2023

FLI recommendations for the UK Global AI Safety Summit

September 2023

Open letters

Signatories: 2672

Open letter calling on world leaders to show long-view leadership on existential threats

The Elders, Future of Life Institute and a diverse range of co-signatories call on decision-makers to urgently address the ongoing impact and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI.
February 14, 2024
Signatories: 31810

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
March 22, 2023
Signatories: Closed

Foresight in AI Regulation Open Letter

The emergence of artificial intelligence (AI) promises dramatic changes in our economic and social structures as well as everyday life […]
June 14, 2020
Signatories: 276

Autonomous Weapons Open Letter: Global Health Community

Given our commitment to do no harm, the global health community has a long history of successful advocacy against inhumane weapons, and the World and American Medical Associations have called for bans on nuclear, chemical and biological weapons. Now, recent advances in artificial intelligence have brought us to the brink of a new arms race in lethal autonomous weapons.
March 13, 2019
Signatories: 5218

Lethal Autonomous Weapons Pledge

Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI. In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine.
June 6, 2018
Signatories: 34378

Autonomous Weapons Open Letter: AI & Robotics Researchers

Autonomous weapons select and engage targets without human intervention. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.
February 9, 2016
Signatories: 11251

Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.
October 28, 2015
Cause areas

Other cause areas

Explore the other cause areas that we consider most pressing:

Nuclear Weapons

Almost eighty years after their introduction, the risks posed by nuclear weapons are as high as ever, and new research reveals that the impacts are even worse than previously reckoned.

Biotechnology

From the accidental release of engineered pathogens to the backfiring of a gene-editing experiment, the dangers from biotechnology are too great for us to proceed blindly.

Sign up for the Future of Life Institute newsletter

Join 40,000+ others receiving periodic updates on our work and cause areas.