
Grantmaking work

Supporting vital cutting-edge work with a wise, future-oriented mindset.


Financial support for promising work aligned with our mission.

Crises like COVID-19 show us that our civilisation is fragile and needs to plan ahead better. FLI’s grants are for those who take this fragility seriously, who wish to study the risks from ever more powerful technologies and develop strategies for reducing them. The goal is to win the wisdom race: the race between the growing power of our technology and the wisdom with which we manage it.

Grants process

FLI offers financial support in three ways. Any questions about grants can be sent to

  1. Requests for Proposals (RFPs), which we publish on our website on a regular basis. Examples include our PhD and Postdoctoral Fellowships in AI Existential Safety, and our grants program on the Humanitarian Impacts of Nuclear War. Grants are awarded on the basis of recommendations from external reviewers.
  2. Institutional support for promising work aligned with our mission. We have partnered with the 'Survival and Flourishing Fund' to offer these grants as a pilot. Our funding priorities and grant recommendations are published for each cycle. Please submit your application with the fund if you are interested in this grant type.
  3. Collaborations in direct support of FLI internal projects and initiatives. For example, FLI has funded film production on autonomous weapons at partner organisations. We do not accept solicitations for this type of grantmaking.
Grant programs

All our grant programs

Open programs

PhD Fellowships

Open for submissions

Postdoctoral Fellowships

Open for submissions

Completed programs

2023 Grants

Funds allocated

2022 Grants


Nuclear War Research

Funds allocated

AI Safety Community

A community dedicated to ensuring AI is developed safely.

The way to ensure a better, safer future with AI is not to impede the development of this new technology but to accelerate our wisdom in handling it by supporting AI safety research.

Since it may take decades to complete this research, it is prudent to start now. AI safety research prepares us better for the future by pre-emptively making AI beneficial to society and reducing its risks.

This mission motivates research across many disciplines, from economics and law to technical areas like verification, validity, security, and control. We’d love you to join!
View the community
Our content

Related posts

Here are various posts relating to our grantmaking work:

The Future of Life Institute announces grants program for existential risk reduction

The Future of Life Institute announces $25M grants program for existential risk reduction Emerging technologies have the potential to help […]
June 3, 2021

2015 AI Grant Recipients

2015 Project Grants Recommended for Funding Primary Investigator Project Title Amount Recommended Email Alex Aiken, Stanford University Verifying Deep Mathematical […]
January 25, 2016

New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial

Elon-Musk-backed program signals growing interest in new branch of artificial intelligence research July 1, 2015 Amid rapid industry investment in […]
October 28, 2015

Contact us

Let's put you in touch with the right person.

We do our best to respond to all incoming queries within three business days. Our team is spread across the globe, so please be considerate and remember that the person you are contacting may not be in your time zone.
