The risks we focus on
We are currently concerned about three major risks, all of which hinge on the development, use and governance of transformative technologies. We focus our efforts on guiding the impacts of these technologies.
From recommender algorithms to self-driving cars, AI is changing our lives. As the impact of this technology magnifies, so will its risks.
From the accidental release of engineered pathogens to the backfiring of a gene-editing experiment, the dangers from biotechnology are too great for us to proceed blindly.
Almost eighty years after their introduction, the risks posed by nuclear weapons are as high as ever - and new research reveals that their impacts would be even worse than previously thought.
Kargu autonomous drones (UAVs) at the campus of OSTIM Technopark in Ankara, Turkey - June 2020.
How we are addressing these issues
There are many potential levers of change for steering the development and use of transformative technologies. We target a range of these levers to increase our chances of success.
We perform policy advocacy in the United States, the European Union, and the United Nations.
Our Policy work
We produce educational materials aimed at informing public discourse, as well as encouraging people to get involved.
Our Outreach work
We provide grants to individuals and organisations working on projects that further our mission.
Our Grant Programs
We convene leaders of the relevant fields to discuss ways of ensuring the safe development and use of powerful technologies.
Our Events
What we're working on
Read about some of our current featured projects:
Our new fictional film depicts a world where artificial intelligence ('AI') is integrated into nuclear command, control and communications systems ('NC3') with terrifying results.
Our involvement with the UN's work spans several years and initiatives, including the Roadmap for Digital Cooperation and the Global Digital Compact (GDC).
The Future of Life Institute accepted entries from teams across the globe to compete for a prize purse of up to $100,000 by designing visions of a plausible, aspirational future that includes strong artificial intelligence.
Every year, the Future of Life Award is given to one or more unsung heroes who have made a significant contribution to preserving the future of life.
Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilize civilization, notably weapons where kill decisions are fully delegated to algorithms.
Our key recommendations include broadening the Act’s scope to regulate general-purpose systems and extending the definition of prohibited manipulation to cover any type of manipulative technique, as well as manipulation that causes societal harm.
Avoiding nuclear war is in the national security interest of all nations. We pursue a range of initiatives to reduce this risk. Our current focus is on mitigating the emerging risk of AI integration into nuclear command, control and communications.
A podcast dedicated to hosting conversations with some of the world's leading thinkers and doers in the field of emerging technology and risk reduction. 140+ episodes since 2015, 4.8/5 stars on Apple Podcasts.
View all projects
Regular updates about the Future of Life Institute, in your inbox
Subscribe to our newsletter and join the 20,000+ people who believe in our mission to preserve the future of life.
Including our latest video on AI + nukes, and FLI cause areas on the big screen.
August 2, 2023
Welcome to the Future of Life Institute newsletter. Every month, we bring 28,000+ subscribers the latest news on how emerging […]
July 5, 2023
Welcome to the Future of Life Institute newsletter. Every month, we bring 27,000+ subscribers the latest news on how emerging […]
May 31, 2023
Read previous editions
Here is the most recent content we have published:
This week marks six months since the open letter calling for a six-month pause on giant AI experiments. Since then, a lot has happened. Our signatories reflect on what needs to happen next.
September 21, 2023
US Senate Hearing 'Oversight of AI: Principles for Regulation': Statement from the Future of Life Institute
We implore Congress to immediately regulate these systems before they cause irreparable damage, and provide five principles for effective oversight.
July 25, 2023
Future of Life Institute Podcast
Here are the most recent podcasts we have published:
September 26, 2023