Our mission

Steering transformative technology towards benefiting life and away from extreme large-scale risks.

We believe that the way powerful technology is developed and used will be the most important factor in determining the prospects for the future of life. This is why we have made it our mission to ensure that technology continues to improve those prospects.


Ensuring that our technology remains beneficial for life

Our mission is to steer transformative technologies away from extreme, large-scale risks and towards benefiting life.
How certain technologies are developed and used has far-reaching consequences for all life on Earth.

If properly managed, these technologies could change the world in a way that makes life substantially better, both for those alive today and those who will one day be born. They could be used to treat and eradicate diseases, strengthen democratic processes, mitigate or even halt climate change, and restore biodiversity.

If irresponsibly managed, they could inflict serious harms on humanity and other animal species. In the most extreme cases, they could bring about the fragmentation or collapse of societies, and even push us to the brink of extinction.

The Future of Life Institute works to reduce the likelihood of these worst-case outcomes, and to help ensure that transformative technologies are used to the benefit of life.

Cause areas

The risks we focus on

We are currently concerned about three major risks, all of which hinge on the development, use and governance of transformative technologies. We focus our efforts on guiding the impacts of these technologies.

Artificial Intelligence

From recommender algorithms to self-driving cars, AI is changing our lives. As the impact of this technology magnifies, so will its risks.

Biotechnology

From the accidental release of engineered pathogens to the backfiring of a gene-editing experiment, the dangers from biotechnology are too great for us to proceed blindly.

Nuclear Weapons

Almost eighty years after their introduction, the risks posed by nuclear weapons remain as high as ever, and new research reveals that their impacts are even worse than previously understood.
[Image: UAV Kargu autonomous drones at the campus of OSTIM Technopark in Ankara, Turkey, June 2020.]

Our work

How we are addressing these issues

There are many potential levers of change for steering the development and use of transformative technologies. We target a range of these levers to increase our chances of success.

Policy

We conduct policy advocacy in the United States, the European Union, and at the United Nations.
Our Policy work

Outreach

We produce educational materials aimed at informing public discourse, as well as encouraging people to get involved.
Our Outreach work

Grantmaking

We provide grants to individuals and organisations working on projects that further our mission.
Our Grant Programs

Events

We convene leaders of the relevant fields to discuss ways of ensuring the safe development and use of powerful technologies.
Our Events

Featured Projects

What we're working on

Read about some of our current featured projects:

Strengthening the European AI Act

Our key recommendations include broadening the Act’s scope to regulate general-purpose systems, and extending the definition of prohibited manipulation to cover any manipulative technique as well as manipulation that causes societal harm.

Educating about Lethal Autonomous Weapons

Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilize civilization, notably weapons where kill decisions are fully delegated to algorithms.

Artificial Escalation

Our new fictional film depicts a world where artificial intelligence ('AI') is integrated into nuclear command, control and communications systems ('NC3') with terrifying results.

Global AI governance at the UN

Our involvement with the UN's work spans several years and initiatives, including the Roadmap for Digital Cooperation and the Global Digital Compact (GDC).

Worldbuilding Competition

The Future of Life Institute accepted entries from teams across the globe to compete for a prize purse of up to $100,000 by designing visions of a plausible, aspirational future that includes strong artificial intelligence.

Future of Life Award

Every year, the Future of Life Award is given to one or more unsung heroes who have made a significant contribution to preserving the future of life.

Mitigating the Risks of AI Integration in Nuclear Launch

Avoiding nuclear war is in the national security interest of all nations. We pursue a range of initiatives to reduce this risk. Our current focus is on mitigating the emerging risk of AI integration into nuclear command, control and communication.

Future of Life Institute Podcast

A podcast dedicated to hosting conversations with some of the world's leading thinkers and doers in the field of emerging technology and risk reduction. 140+ episodes since 2015, 4.8/5 stars on Apple Podcasts.
View all projects

Our newsletter

Regular updates about the Future of Life Institute, in your inbox

Subscribe to our newsletter and join over 20,000 people who believe in our mission to preserve the future of life.
Recent editions:

Future of Life Institute Newsletter: 'Imagine A World' is out today!

Envisioning Positive Futures With AI, + Urgent Calls For AI Regulation Continue.
September 5, 2023

Future of Life Institute Newsletter: Hollywood Talks AI

Including our latest video on AI + nukes, and FLI cause areas on the big screen.
August 2, 2023

Future of Life Institute Newsletter: Our Most Realistic Nuclear War Simulation Yet

Welcome to the Future of Life Institute newsletter. Every month, we bring 28,000+ subscribers the latest news on how emerging […]
July 5, 2023
Read previous editions
Our content

Latest posts

Here is the most recent content we have published:

As Six-Month Pause Letter Expires, Experts Call for Regulation on Advanced AI Development

This week will mark six months since the open letter calling for a six-month pause on giant AI experiments. Since then, a lot has happened. Our signatories reflect on what needs to happen next.
September 21, 2023

Introductory Resources on AI Risks

Why are people so worried about AI?
September 18, 2023

US Senate Hearing 'Oversight of AI: Principles for Regulation': Statement from the Future of Life Institute

We implore Congress to immediately regulate these systems before they cause irreparable damage, and provide five principles for effective oversight.
July 25, 2023

FLI on "A Statement on AI Risk" and Next Steps

The view that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such […]
May 30, 2023

Sign up for the Future of Life Institute newsletter

Join 40,000+ others receiving periodic updates on our work and cause areas.