An overview of the type of work we do, and all of our current and past projects.
Our areas of work
We work on projects across a few distinct areas:
Our policy work
We perform policy advocacy in the United States, European Union, and United Nations.
Our futures work
We work on projects which aim to guide humanity towards the beneficial outcomes made possible by transformative technologies.
Our outreach work
We produce educational materials aimed at informing public discourse, as well as encouraging people to get involved.
Our grant programs
We provide grants to individuals and organisations working on projects that further our mission.
We convene leaders of the relevant fields to discuss ways of ensuring the safe development and use of powerful technologies.
Our most important contributions
Here are a few of our proudest achievements:
Hosted the first AI Safety conferences
We were the first to convene leading figures in the field of AI to discuss our concerns about potential safety risks of the emerging technology.
View our events
Created the first AI Safety grant program
From 2015 to 2017, we ran the first ever grant program dedicated to funding AI Safety projects. We currently offer a range of grant opportunities for projects that further our mission.
View our grants
Developed the AI Asilomar Principles
In 2017, FLI coordinated the development of the Asilomar AI Principles, one of the earliest and most influential sets of AI governance principles.
View the principles
Celebrated 18 unsung heroes with Future of Life Awards
Every year since 2017, the Future of Life Award has celebrated the contributions of people who helped preserve the future of life.
See the award
Produced viral video series raising the alarm on lethal autonomous weapons
We produced two short films, with a combined 75+ million views, depicting a world in which lethal autonomous weapons have been allowed to proliferate.
Watch the videos
AI recommendation in the UN digital cooperation roadmap
Our recommendations on the global governance of AI technologies were adopted in the UN Secretary-General's digital cooperation roadmap.
View the roadmap
What we're working on
Here is an overview of all the projects we are working on right now:
On November 1-2, the United Kingdom will convene the first ever global governmental summit focused on AI Safety. In the run-up to the summit, FLI has produced and published a document outlining key recommendations.
Our key recommendations include broadening the Act's scope to regulate general-purpose AI systems, and extending the definition of prohibited manipulation to cover any manipulative technique, as well as manipulation that causes societal harm.
Can you imagine a world in 2045 where we manage to avoid the climate crisis, major wars, and the potential harms of artificial intelligence? Our new podcast series explores ways we could build a more positive future, and offers thought-provoking ideas for how we might get there.
Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilize civilization, notably weapons where kill decisions are fully delegated to algorithms.
Our new fictional film depicts a world where artificial intelligence ('AI') is integrated into nuclear command, control and communications systems ('NC3') with terrifying results.
Our US policy team advises policymakers in Congress and Statehouses on how to ensure that AI systems are safe and beneficial.
Our involvement with the UN's work spans several years and initiatives, including the Roadmap for Digital Cooperation and the Global Digital Compact (GDC).
The Future of Life Institute accepted entries from teams across the globe to compete for a prize purse of up to $100,000 by designing visions of a plausible, aspirational future that includes strong artificial intelligence.
Every year, the Future of Life Award is given to one or more unsung heroes who have made a significant contribution to preserving the future of life.
Avoiding nuclear war is in the national security interest of all nations. We pursue a range of initiatives to reduce this risk. Our current focus is on mitigating the emerging risk of AI integration into nuclear command, control and communications systems.
A podcast dedicated to hosting conversations with some of the world's leading thinkers and doers in the field of emerging technology and risk reduction. 140+ episodes since 2015, 4.8/5 stars on Apple Podcasts.
Were you looking for something else?
Here are a couple of other pages you might have been looking for: