An overview of the types of work we do, and all of our current and past projects.
Our areas of work
We work on projects across a few distinct areas:
Our policy work
We perform policy advocacy in the United States and the European Union, and at the United Nations.
Our outreach work
We produce educational materials aimed at informing public discourse and encouraging people to get involved.
Our grant programs
We provide grants to individuals and organisations working on projects that further our mission.
Our events
We convene leaders of the relevant fields to discuss ways of ensuring the safe development and use of powerful technologies.
Our most important contributions
Here are a few of our proudest achievements:
Hosted the first AI Safety conferences
We were the first to convene leading figures in the field of AI to discuss our concerns about potential safety risks of the emerging technology.
View our events
Created the first AI Safety grant program
From 2015 to 2017, we ran the first-ever grant program dedicated to funding AI Safety projects. We currently offer a range of grant opportunities for projects that further our mission.
View our grants
Developed the Asilomar AI Principles
In 2017, FLI coordinated the development of the Asilomar AI Principles, one of the earliest and most influential sets of AI governance principles.
View the principles
Celebrated 18 unsung heroes with Future of Life Awards
Every year since 2017, the Future of Life Award has celebrated unsung heroes whose contributions have helped preserve the future of life.
See the award
Produced viral video series raising the alarm on lethal autonomous weapons
We produced two short films, with a combined 75+ million views, depicting a world in which lethal autonomous weapons have been allowed to proliferate.
Watch the videos
Contributed AI recommendations to the UN digital cooperation roadmap
Our recommendations on the global governance of AI technologies were adopted in the UN Secretary-General's digital cooperation roadmap.
View the roadmap
What we're working on
Here is an overview of all the projects we are working on right now:
Avoiding nuclear war is in the national security interest of all nations. We pursue a range of initiatives to reduce this risk. Our current focus is on mitigating the emerging risk of AI integration into nuclear command, control, and communications (NC3).
Every year, the Future of Life Award is given to one or more unsung heroes who have made a significant contribution to preserving the future of life.
Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilize civilization, notably weapons where kill decisions are fully delegated to algorithms.
Our key recommendations on the EU AI Act include broadening the Act's scope to regulate general-purpose AI systems, and extending the definition of prohibited manipulation to cover any type of manipulative technique, as well as manipulation that causes societal harm.
Our feedback on the first draft of the National Institute of Standards and Technology's (NIST) AI Risk Management Framework addressed extreme and unacceptable risks, the loyalty of AI systems, and the risk management of general-purpose systems.
A podcast dedicated to conversations with some of the world's leading thinkers and doers in the field of emerging technology and risk reduction, with 140+ episodes since 2015 and a 4.8/5-star rating on Apple Podcasts.
The Future of Life Institute accepted entries from teams across the globe competing for a prize purse of up to $100,000 by designing visions of a plausible, aspirational future that includes strong artificial intelligence.
Were you looking for something else?
Here are a couple of other pages you might have been looking for: