
Make AI Safe

Why we need common-sense AI regulation

Big Tech is racing to release massively powerful AIs—with no meaningful guardrails or oversight.

Despite admitting the risks, they aggressively lobby against regulation.

Leaders should step in and require them to be safe. Do you agree?

Find out how you can help

We will send occasional updates on what you can do to support AI regulation, as well as our monthly newsletter.
AI is poised to remake the world. We can decide how.
The past few years have seen a revolution in AI. We now have systems that can do many of the things humans can, developed and deployed by some of the world's largest corporations. It's amazing.

We're seeing rapid development in AI capabilities.

But it is also extremely dangerous.

Thousands of experts, including leaders of those same AI companies, have sounded the alarm about the massive risks—from bioweapons, to infrastructure attacks, to mass unemployment, even extinction.
Open Letter
Future of Life Institute

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
March 2023
Signatories: 33,707
View open letter
Open Letter
Center for AI Safety

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

May 2023
Signatories
CEOs of OpenAI, Google DeepMind, Anthropic, and hundreds of AI experts and public figures.
View open letter

AI-driven harms are already destroying lives and livelihoods.

An explosion of AI-generated deepfakes is disrupting elections and abusing women's likenesses for pornography. AI is being weaponised for financial fraud and cyberattacks. Autonomous weapons are already in use on the battlefield today, using algorithms to decide who lives and who dies.

The autonomous weapons of science fiction are here.

The era in which algorithms decide who lives and who dies is upon us. We must act now to prohibit and regulate these weapons.

View the site
Yet corporations are steaming full-speed ahead.

The leading AI developers are locked in a furious race to develop and deploy the most powerful systems as fast as possible.

Intense competition is already forcing them to cut corners. We need laws to keep us safe.

Policymakers face a barrage of industry lobbyists.

Some efforts at regulation have begun, but are slowed by political inertia and opposed by furious lobbying from industry.

We urgently need lawmakers to step in.

We can build the solutions to make sure AI preserves individual freedom, and doesn't concentrate enormous power within a handful of companies or individuals.

"It's time for bold, new thinking on existential threats"

Global experts present urgent solutions for world leaders.
In partnership with the Future of Life Institute, The Elders present a series of expert-led short films to inspire bold, new thinking from world leaders on the greatest challenges facing humanity.
View the film series

Confronting Tech Power

Report • AI Now Institute • Quote from coverage by MIT Tech Review
"If regulators don’t act now, the generative AI boom will concentrate Big Tech’s power even further. To understand why, consider that the current AI boom depends on two things: large amounts of data, and enough computing power to process it."

How to mitigate AI-driven power concentration

We're offering up to $4M for projects that work to mitigate the dangers of AI-driven power concentration and move towards a better world of meaningful human agency.
Status:
Open for submissions
Grant program • Future of Life Institute
Deadline: 31 October 2024

We Need An FDA For Artificial Intelligence

“How the FDA came into being and evolved can provide us with crucial lessons on mitigating technology’s risks while maximizing its benefits.”
Noema • 16 July 2024
We know how to do this safely.

Proposals already exist for how AI can be governed effectively. These proposals are based on existing models for the governance of powerful technologies of the past.

By creating enforceable safety standards, lawmakers can protect people and sustain innovation for decades to come. By empowering robust institutions, they can deliver the oversight necessary to keep Big Tech and government in line.

Our Position on AI

Advanced AI should be developed safely, or not at all. AI should be developed for all people and to solve real human problems.

We oppose developing AI that poses large-scale risks to humanity, including via power concentration, and favor AI built to solve real human problems. We believe frontier AI is currently being developed in an unsafe and unaccountable manner.
See our Position on AI

We present a framework for regulating AI with safety standards.

Evaluating proposals for AI governance and providing a regulatory framework for robust safety standards, measures and oversight.
October 2023
We can reap the rewards of AI technology while minimizing the risks.

With the wrong rules—or no rules at all—advanced AI is likely to concentrate power within a handful of corporations or governments and endanger everyone else.

By carefully developing safe and controllable AI systems to solve specific problems and enable people to do what they could not do before, we can reap huge benefits.


We must take control of AI before it controls us

The ongoing, unchecked race to develop increasingly powerful AI systems puts humanity at risk. The threats are potentially catastrophic: rampant unemployment, bioterrorism, widespread disinformation, nuclear war, and more.

We urgently need lawmakers to step in and ensure a safety-first approach with proper oversight, standards and enforcement. These are not only critical to protecting human lives and wellbeing; they are essential to safeguarding innovation and ensuring that everyone can access the incredible potential benefits of AI going forward. We must not let a handful of tech corporations jeopardise humanity's shared future.

Sign up for the Future of Life Institute newsletter

Join 40,000+ others receiving periodic updates on our work and cause areas.