Grantmaking work
Supporting vital cutting-edge work with a wise, future-oriented mindset.
Introduction
Financial support for promising work aligned with our mission.
Crises like COVID-19 show that our civilisation is fragile and needs to plan ahead better. FLI’s grants are for those who take this fragility seriously: people who wish to study the risks from ever more powerful technologies and to develop strategies for reducing them. The goal is to win the wisdom race: the race between the growing power of our technology and the wisdom with which we manage it.
Recently, we announced a $25M multi-year grant program aimed at tipping the balance away from extinction, towards flourishing. This is made possible by the generosity of cryptocurrency pioneer Vitalik Buterin and the Shiba Inu community. The program is designed to offer a range of grant opportunities within the areas of AI Existential Safety, Policy and Behavioural Science.
Grant programs
All our grant programs
Open programs

Nuclear War Research
Finalists submitting materials

PhD Fellowships
Funds allocated

Postdoctoral Fellowships
Funds allocated
Completed programs

2018 AGI Safety Grant Program
Completed

2015 AI Safety Grant Program
Completed
AI Safety Community
A community dedicated to ensuring AI is developed safely.
The way to ensure a better, safer future with AI is not to impede the development of this new technology, but to accelerate our wisdom in handling it by supporting AI safety research.
Since this research may take decades to complete, it is prudent to start now. AI safety research prepares us for the future by pre-emptively making AI beneficial to society and reducing its risks.
This mission motivates research across many disciplines, from economics and law to technical areas like verification, validity, security and control. We’d love you to join!
View the community

Our content
Related posts
Here are some posts related to our grantmaking work:

The Future of Life Institute announces $25M grants program for existential risk reduction
Emerging technologies have the potential to help […]
June 3, 2021

2015 AI Grant Recipients
2015 Project Grants Recommended for Funding: Primary Investigator, Project Title, Amount Recommended, Email. Alex Aiken, Stanford University, Verifying Deep Mathematical […]
January 25, 2016

New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial
Elon Musk-backed program signals growing interest in new branch of artificial intelligence research. July 1, 2015. Amid rapid industry investment in […]
October 28, 2015
Contact us
Let's put you in touch with the right person.
We do our best to respond to all incoming queries within three business days. Our team is spread across the globe, so please be considerate and remember that the person you are contacting may not be in your time zone.