Cause Area

Artificial Intelligence

From recommender algorithms to self-driving cars, AI is changing our lives. As the impact of this technology magnifies, so will its risks.

Artificial Intelligence, which encompasses everything from recommender algorithms to self-driving cars, is racing forward. Today we have 'narrow AI' systems, which perform isolated tasks. These already pose major risks, such as the erosion of democratic processes, financial flash crashes, and an arms race in autonomous weapons.

Looking ahead, many researchers are pursuing 'AGI', general AI which can perform as well as or better than humans at a wide range of cognitive tasks. Once AI systems can themselves design smarter systems, we may hit an 'intelligence explosion', very quickly leaving humanity behind. This could eradicate poverty or war; it could also eradicate us.

That risk comes not from AI's potential malevolence or consciousness, but from its competence: not from how it feels, but from what it does. Humans could, for instance, lose control of a high-performing system programmed to do something destructive, with devastating impact. And even if an AI is programmed to do something beneficial, it could still develop a destructive method to achieve that goal.

AI doesn't need consciousness to pursue its goals, any more than heat-seeking missiles do. Equally, the danger is not from robots, per se, but from intelligence itself, which requires nothing more than an internet connection to do us incalculable harm.

Misconceptions about this still loom large in public discourse. However, thanks to experts speaking out on these issues, and to machine learning reaching certain milestones far earlier than expected, informed interest in AI safety has blossomed in recent years.

Superintelligence is not inevitable, but neither is it impossible. It might be right around the corner; it might never happen. Either way, civilisation flourishes only as long as we can win the race between the growing power of technology and the wisdom with which we design and manage it. With AI, the best way to win that race is not to impede the former, but to accelerate the latter by supporting AI safety research and risk governance.

Since it may take decades to complete this research, it is prudent to start now. AI safety research prepares us better for the future by pre-emptively making AI beneficial to society and reducing its risks.

Meanwhile, policy cannot form and reform at the same pace as AI risks evolve; it too must be pre-emptive, covering dangers both present and forthcoming.


Recommended reading

Benefits & Risks of Artificial Intelligence

From SIRI to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons.

Featured resources

Introductory Resources on AI Risks

Why are people so worried about AI?
September 18, 2023

Global AI Policy

How countries and organizations around the world are approaching the benefits and risks of AI Artificial intelligence (AI) holds great […]
December 16, 2022

AI Value Alignment Research Landscape

This landscape synthesizes a variety of AI safety research agendas along with other papers in AI, machine learning, ethics, governance, […]
November 16, 2018

Featured videos


Featured posts

Characterizing AI Policy using Natural Language Processing

As interest in Artificial Intelligence (AI) grows across the globe, governments have focused their attention on identifying the soft and […]
December 16, 2022

Superintelligence survey

Click here to see this page in other languages: Chinese, French, German, Japanese, Russian. The Future of AI - What Do You Think? Max […]
August 15, 2017

A Principled AI Discussion in Asilomar

The Asilomar Conference took place against a backdrop of growing interest from wider society in the potential of artificial intelligence […]
January 18, 2017

When AI Journalism Goes Bad

Slate is currently running a feature called "Future Tense," which claims to be the "citizen's guide to the future." Two […]
April 26, 2016

Introductory Resources on AI Safety Research

Reading list to get up to speed on the main ideas in the field. The resources are selected for relevance and/or brevity, […]
February 29, 2016

Hawking Reddit AMA on AI

Our Scientific Advisory Board member Stephen Hawking's long-awaited Reddit AMA answers on Artificial Intelligence just came out, and was all over today's […]
October 12, 2015


Frequently Asked Questions about the Future of Artificial Intelligence

Click here to see this page in other languages: Chinese, German, Japanese, Korean […]
October 12, 2015

Open letter on AI weapons

At a press conference at the IJCAI AI-meeting in Buenos Aires today, Stuart Russell and Toby Walsh announced an open […]
July 29, 2015

Featured open letters


Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
March 22, 2023

Foresight in AI Regulation Open Letter

The emergence of artificial intelligence (AI) promises dramatic changes in our economic and social structures as well as everyday life […]
June 14, 2020

Autonomous Weapons Open Letter: Global Health Community

Given our commitment to do no harm, the global health community has a long history of successful advocacy against inhumane weapons, and the World and American Medical Associations have called for bans on nuclear, chemical and biological weapons. Now, recent advances in artificial intelligence have brought us to the brink of a new arms race in lethal autonomous weapons.
March 13, 2019

Lethal Autonomous Weapons Pledge

Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI. In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine.
June 6, 2018

Autonomous Weapons Open Letter: AI & Robotics Researchers

Autonomous weapons select and engage targets without human intervention. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.
February 9, 2016

Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.
October 28, 2015

Other cause areas

Explore the other cause areas that we consider most pressing:

Nuclear Weapons

Almost eighty years after their introduction, the risks posed by nuclear weapons are as high as ever, and new research reveals that the impacts are even worse than previously reckoned.


Biotechnology

From the accidental release of engineered pathogens to the backfiring of a gene-editing experiment, the dangers from biotechnology are too great for us to proceed blindly.
