Our mission
Steering transformative technology towards benefitting life and away from extreme large-scale risks.
We believe that the way powerful technology is developed and used will be the most important factor in determining the prospects for the future of life. This is why we have made it our mission to ensure that technology continues to improve those prospects.
Recent updates
Happening now
The winners of Superintelligence Imagined, our $70,000 creative contest on the risks of superintelligence, have been announced, including Grand Prize winner "Writing Doom" and a range of other videos, short stories, games, and multimedia pieces! Take a look at some very impressive efforts to educate audiences about highly advanced AI systems.
Cause areas
The risks we focus on
We are currently concerned by three major risks. They all hinge on the development, use and governance of transformative technologies. We focus our efforts on guiding the impacts of these technologies.
Artificial Intelligence
From recommender algorithms to chatbots to self-driving cars, AI is changing our lives. As the impact of this technology grows, so will the risks.
Biotechnology
From the accidental release of engineered pathogens to the backfiring of a gene-editing experiment, the dangers from biotechnology are too great for us to proceed blindly.
Nuclear Weapons
Almost eighty years after their introduction, the risks posed by nuclear weapons are as high as ever, and new research reveals that the impacts would be even worse than previously understood.
Kargu autonomous drones (UAVs) at the campus of OSTIM Technopark in Ankara, Turkey, June 2020.
Our work
How we are addressing these issues
There are many potential levers of change for steering the development and use of transformative technologies. We target a range of these levers to increase our chances of success.
Policy
We perform policy advocacy in the United States, the European Union, and the United Nations.
Our Policy work
Outreach
We produce educational materials aimed at informing public discourse, as well as encouraging people to get involved.
Our Outreach work
Grantmaking
We provide grants to individuals and organisations working on projects that further our mission.
Our Grant Programs
Events
We convene leaders of the relevant fields to discuss ways of ensuring the safe development and use of powerful technologies.
Our Events
Featured Projects
What we're working on
Read about some of our current featured projects:
Combatting Deepfakes
2024 is rapidly turning into the Year of Fake. As part of a growing coalition of concerned organizations, FLI is calling on lawmakers to take meaningful steps to disrupt the AI-driven deepfake supply chain.
Superintelligence Imagined Creative Contest
A contest for the best creative educational materials on superintelligence, its associated risks, and the implications of this technology for our world. Five prizes of $10,000 each.
Perspectives of Traditional Religions on Positive AI Futures
Most of the global population participates in a traditional religion. Yet the perspectives of these religions are largely absent from strategic AI discussions. This initiative aims to support religious groups to voice their faith-specific concerns and hopes for a world with AI, and work with them to resist the harms and realise the benefits.
The Elders Letter on Existential Threats
The Elders, the Future of Life Institute and a diverse range of preeminent public figures are calling on world leaders to urgently address the ongoing harms and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI.
Realising Aspirational Futures – New FLI Grants Opportunities
We are opening two new funding opportunities to support research into the ways that artificial intelligence can be harnessed safely to make the world a better place.
AI Convergence: Risks at the Intersection of AI and Nuclear, Biological and Cyber Threats
The dual-use nature of AI systems can amplify the dual-use nature of other technologies—this is known as AI convergence. We provide policy expertise to policymakers in the United States in three key convergence areas: biological, nuclear, and cyber.
Strengthening the European AI Act
Our key recommendations include broadening the Act's scope to regulate general-purpose AI systems and extending the definition of prohibited manipulation to cover any type of manipulative technique, as well as manipulation that causes societal harm.
Educating about Lethal Autonomous Weapons
Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilize civilization, notably weapons where kill decisions are fully delegated to algorithms.
Global AI governance at the UN
Our involvement with the UN's work spans several years and initiatives, including the Roadmap for Digital Cooperation and the Global Digital Compact (GDC).
Worldbuilding Competition
The Future of Life Institute accepted entries from teams across the globe competing for a prize purse of up to $100,000 by designing visions of a plausible, aspirational future that includes strong artificial intelligence.
Future of Life Award
Every year, the Future of Life Award is given to one or more unsung heroes who have made a significant contribution to preserving the future of life.
View all projects
Newsletter
Regular updates about the technologies shaping our world
Every month, we bring 41,000+ subscribers the latest news on how emerging technologies are transforming our world. Each edition includes a summary of major developments in our cause areas, and key updates on the work we do. Subscribe to our newsletter to receive these highlights at the end of each month.
Future of Life Institute Newsletter: Illustrating Superintelligence
Need a break from US election news? Explore the results of our $70K creative contest; new national security AI guidance from the White House; polling teens on AI; and much more.
Maggie Munro
1 November, 2024
Future of Life Institute Newsletter: On SB 1047, Gov. Newsom Caves to Big Tech
A disappointing outcome for the AI safety bill, updates from UNGA, our $1.5 million grant for global risk convergence research, and more.
Maggie Munro
1 October, 2024
Future of Life Institute Newsletter: California’s AI Safety Bill Heads to Governor’s Desk
Latest policymaking updates, OpenAI safety team reportedly halved, moving towards an autonomous weapons treaty, and more.
Maggie Munro
30 August, 2024
Read previous editions
Our content
Latest posts
The most recent posts we have published:
Max Tegmark: AGI Manhattan Project Proposal is Scientific Fraud
A new report for Congress recommends that the US start a "Manhattan Project" to build Artificial General Intelligence. To do so would be a suicide race.
20 November, 2024
FLI Statement on White House National Security Memorandum
Last week the White House released a National Security Memorandum concerning AI governance and risk management. The NSM issues guidance […]
28 October, 2024
Paris AI Safety Breakfast #3: Yoshua Bengio
The third of our 'AI Safety Breakfasts' event series, featuring Yoshua Bengio on the evolution of AI capabilities, loss-of-control scenarios, and proactive vs reactive defense.
16 October, 2024
Paris AI Safety Breakfast #2: Dr. Charlotte Stix
The second of our 'AI Safety Breakfasts' event series, featuring Dr. Charlotte Stix on model evaluations, deceptive AI behaviour, and the AI Safety and Action Summits.
14 October, 2024
View all posts
Papers
The most recent policy and research papers we have published:
Feedback on the Scientific Panel of Independent Experts Implementing Regulation
November 2024
US AI Safety Institute codification (FAIIA vs. AIARA)
November 2024
Input on Federal AI Reporting Requirements
October 2024
Implementing the Senate AI Roadmap
June 2024
View all papers
Future of Life Institute Podcast
The most recent podcast episodes we have released:
25 October, 2024
Andrea Miotti on a Narrow Path to Safe, Transformative AI
13 September, 2024
Tom Barnes on How to Build a Resilient World
View all episodes