The 2023 Future of Life Award
This year, the Future of Life Award honours five unsung individuals who used the power of storytelling to reduce the threat of nuclear war. Their work played a direct role in shaping policy and raising public awareness about the grave risks associated with nuclear warfare. Their achievements have helped avert catastrophe for humanity.
View the award
The risks we focus on
We are currently concerned about three major risks, all of which hinge on the development, use and governance of transformative technologies. We focus our efforts on guiding the impacts of these technologies.
From recommender algorithms to chatbots to self-driving cars, AI is changing our lives. As the impact of this technology grows, so will the risks.
From the accidental release of engineered pathogens to the backfiring of a gene-editing experiment, the dangers from biotechnology are too great for us to proceed blindly.
Almost eighty years after their introduction, the risks posed by nuclear weapons are as high as ever - and new research reveals that their impacts would be even worse than previously thought.
UAV Kargu autonomous drones at the campus of OSTIM Technopark in Ankara, Turkey - June 2020.
How we are addressing these issues
There are many potential levers of change for steering the development and use of transformative technologies. We target a range of these levers to increase our chances of success.
We perform policy advocacy in the United States, the European Union, and the United Nations.
Our Policy work
We produce educational materials aimed at informing public discourse, as well as encouraging people to get involved.
Our Outreach work
We provide grants to individuals and organisations working on projects that further our mission.
Our Grant Programs
We convene leaders of the relevant fields to discuss ways of ensuring the safe development and use of powerful technologies.
Our Events
What we're working on
Read about some of our current featured projects:
On November 1-2, the United Kingdom will convene the first-ever global government meeting focussed on AI safety. In the run-up to the summit, FLI has produced and published a document outlining key recommendations.
Our key recommendations include broadening the EU AI Act’s scope to regulate general-purpose systems, and extending the definition of prohibited manipulation to cover any type of manipulative technique, as well as manipulation that causes societal harm.
Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilise civilisation, notably weapons where kill decisions are fully delegated to algorithms.
Our new fictional film depicts a world where artificial intelligence ('AI') is integrated into nuclear command, control and communications systems ('NC3') with terrifying results.
Our involvement with the UN's work spans several years and initiatives, including the Roadmap for Digital Cooperation and the Global Digital Compact (GDC).
The Future of Life Institute accepted entries from teams across the globe competing for a prize purse of up to $100,000 by designing visions of a plausible, aspirational future that includes strong artificial intelligence.
Every year, the Future of Life Award is given to one or more unsung heroes who have made a significant contribution to preserving the future of life.
Avoiding nuclear war is in the national security interest of all nations. We pursue a range of initiatives to reduce this risk. Our current focus is on mitigating the emerging risk of AI integration into nuclear command, control and communications.
View all projects
Regular updates about the technologies shaping our world
Every month, we bring 41,000+ subscribers the latest news on how emerging technologies are transforming our world. Each edition includes a summary of major developments in our cause areas, along with key updates on our work. Subscribe to our newsletter to receive these highlights at the end of each month.
November 1, 2023
Reflections on the six-month anniversary of our open letter, our UK AI Safety Summit recommendations, and more.
October 1, 2023
Read previous editions
The most recent posts we have published:
This collaboration between the Future of Life Institute and Mithril Security explores hardware-backed AI governance tools for transparency, traceability, and confidentiality.
November 30, 2023
A last-ditch assault on the EU AI Act threatens to jeopardise one of the legislation's most important functions: preventing our most powerful AI models from causing widespread harm to society.
November 22, 2023
Our analysis shows that the recent non-paper drafted by Italy, France, and Germany largely fails to provide any provisions on foundation models or general purpose AI systems, and offers much less oversight and enforcement than the existing alternatives.
November 21, 2023
With the 2023 Future of Life Award, we celebrate two films – both released amidst the cold war – that were profoundly impactful in reducing the threat of nuclear war, and the five storytellers behind them.
November 13, 2023
View all posts
The most recent policy papers we have published:
FLI AI Liability Directive: Executive Summary
FLI AI Liability Directive: Full Version
Artificial Intelligence and Nuclear Weapons: Problem Analysis and US Policy Recommendations
FLI Governance Scorecard and Safety Standards Policy (SSP)
View all policy papers
Future of Life Institute Podcast
The most recent podcast episodes we have released:
View all episodes