
Our mission
Steering transformative technology towards benefiting life and away from extreme, large-scale risks.
We believe that the way powerful technology is developed and used will be the most important factor in determining the prospects for the future of life. This is why we have made it our mission to ensure that technology continues to improve those prospects.
How certain technologies are developed and used has far-reaching consequences for all life on Earth.
If properly managed, these technologies could change the world in a way that makes life substantially better, both for those alive today and those who will one day be born. They could be used to treat and eradicate diseases, strengthen democratic processes, mitigate - or even halt - climate change and restore biodiversity.
If irresponsibly managed, they could inflict serious harms on humanity and other animal species. In the most extreme cases, they could bring about the fragmentation or collapse of societies, and even push us to the brink of extinction.
The Future of Life Institute works to reduce the likelihood of these worst-case outcomes, and to help ensure that transformative technologies are used to the benefit of life.

Cause areas
The risks we focus on
We are currently concerned about four major risks, all of which hinge on the development, use and governance of transformative technologies. We focus our efforts on guiding the impacts of these technologies.

Artificial Intelligence
From recommender algorithms to self-driving cars, AI is changing our lives. As the impact of this technology magnifies, so will its risks.

Biotechnology
From the accidental release of engineered pathogens to the backfiring of a gene-editing experiment, the dangers from biotechnology are too great for us to proceed blindly.

Nuclear Weapons
Almost eighty years after nuclear weapons were introduced, the risks they pose are as high as ever - and new research reveals that their impacts would be even worse than previously thought.

Climate Change
Likely the best known of our cause areas, climate change increases the likelihood of other catastrophic risks, such as pandemics or war, in addition to posing many catastrophic threats of its own.
Our work
How we are addressing these issues
There are many potential levers of change for steering the development and use of transformative technologies. We target a range of these levers to increase our chances of success.
Policy
We perform policy advocacy in the United States, the European Union, and the United Nations.
Our Policy work

Outreach
We produce educational materials aimed at informing public discourse, as well as encouraging people to get involved.
Our Outreach work

Grantmaking
We provide grants to individuals and organisations working on projects that further our mission.
Our Grant Programs

Events
We convene leaders of the relevant fields to discuss ways of ensuring the safe development and use of powerful technologies.
Our Events
UAV Kargu autonomous drones at the campus of OSTIM Technopark in Ankara, Turkey. June 2020
Projects
What we're working on
Read about some of our ongoing projects:

Mitigating the Risks of AI Integration in Nuclear Launch
Avoiding nuclear war is in the national security interest of all nations. We pursue a range of initiatives to reduce this risk. Our current focus is on mitigating the emerging risk of AI integration into nuclear command, control and communication.

Future of Life Award
Every year, the Future of Life Award is given to one or more unsung heroes who have made a significant contribution to preserving the future of life.

Educating about Lethal Autonomous Weapons
Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilize civilization, notably weapons where kill decisions are fully delegated to algorithms.

Strengthening the European AI Act
Our key recommendations include broadening the Act’s scope to regulate general-purpose systems, and extending the definition of prohibited manipulation to cover any manipulative technique as well as manipulation that causes societal harm.

Strengthening the NIST AI Risk Management Framework
Our feedback on the first draft of the National Institute of Standards and Technology’s (NIST) AI risk management framework addressed extreme and unacceptable risks, loyalty of AI systems and the risk management of general purpose systems.

Future of Life Institute Podcast
A podcast dedicated to hosting conversations with some of the world's leading thinkers and doers in the field of emerging technology and risk reduction. 140+ episodes since 2015, 4.8/5 stars on Apple Podcasts.

Worldbuilding Competition
The Future of Life Institute accepted entries from teams across the globe competing for a prize purse of up to $100,000 by designing visions of a plausible, aspirational future that includes strong artificial intelligence.
External Input
Meet our External Advisors
We consult a range of scientific experts and communicators across our cause areas to ensure our approach is well-informed, strategic and adaptable.

Alan Alda

Alan Guth

Christof Koch

Elon Musk

Erik Brynjolfsson

Francesca Rossi

Frank Wilczek

George Church

Martin Rees

Morgan Freeman

Sandra Faber

Saul Perlmutter

Stuart Russell
Our newsletter
Regular updates about the Future of Life Institute, in your inbox
Subscribe to our newsletter and join the 20,000+ people who believe in our mission to preserve the future of life.
Read previous editions

Future of Life Institute February 2023 Newsletter: Progress on Autonomous Weapons!
Welcome to the Future of Life Institute newsletter. Every month, we bring 24,000+ subscribers the latest news on how emerging […]
March 1, 2023

FLI November 2022 Newsletter: AI Liability Directive
Welcome to the Future of Life Institute Newsletter. Every month, we bring 24,000+ subscribers the latest news on how emerging technologies are transforming our […]
December 9, 2022
Media mentions
Our content
Latest posts
Here is the most recent content we have published:

FAQs about FLI’s Open Letter Calling for a Pause on Giant AI Experiments
Disclaimer: Please note that these FAQs have been prepared by FLI and do not necessarily reflect the views of the […]
March 31, 2023

Future of Life Institute Newsletter: Pause Giant AI Experiments!
Welcome to the Future of Life Institute newsletter. Every month, we bring 25,000+ subscribers the latest news on how emerging […]
March 31, 2023

Pause Giant AI Experiments: An Open Letter
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
March 22, 2023

AI policy resources (archived)
Summaries of AI Policy Resources The resources below include papers, reports, and articles relevant to AI policy debates. They are […]
March 21, 2023