Frequently Asked Questions
What work does FLI do?
Our Core Team represents a range of academic disciplines and professional and geographic backgrounds. FLI engages in grantmaking, policy research and advocacy, and an array of education and outreach projects.
Through our Grants, FLI offers financial support for promising work aligned with our mission. We currently have a $25M grants program on existential risk reduction, including PhD and Post-doctoral Fellowships for AI existential safety research and a grants competition for research into the Humanitarian Impacts of Nuclear War. We also ran AI and AGI Safety Grants programs in 2015 and 2018.
Our Policy work aims to improve AI governance and reduce the risk of nuclear war. At present this involves advocating for a treaty on autonomous weapons at the UN, for a more future-proof AI Act in the EU, and for a more robust AI Risk Management Framework in the U.S. Previously, the UN Secretary-General consulted us as the civil society ‘co-champion’ for AI recommendations.
Finally, the Outreach team seeks to educate and inform diverse audiences about our cause areas, using social media, podcasts, videos, websites, Open Letters, and the Future of Life Award, which celebrates unsung heroes who made our world a better, safer place. Since founding the award in 2017, we have given this $50,000 prize to sixteen such inspiring individuals.
Over the past decade, FLI has also hosted many events, conferences and workshops. These events have contributed to many important outcomes, including the influential Asilomar Principles for AI governance and the mainstreaming of the AI safety research field.
Who funds FLI?
The Future of Life Institute is an apolitical non-profit funded by a range of individuals and organisations who share our goal to reduce extreme large-scale risks from transformative technologies. For more information, see our Funding page.
What organisations are you affiliated with?
FLI is affiliated with the Partnership on AI, a nonprofit coalition committed to the responsible use of artificial intelligence, and with the UN Secretary-General's High-level Panel on Digital Cooperation – Roundtable 3C on Artificial Intelligence.
What is the role of your External Advisors?
We value advice from a diverse group of thinkers with expertise in scientific research and science communication. Their input helps to ensure that our approach to executing our mission is well-informed, strategic and adaptable. Our advisors have generously agreed to let us solicit their advice for free, because they believe in our mission. Although they offer us advice, they are not involved in our decision-making, for which ultimate responsibility rests with our Board of Directors.
How do you decide your policy positions?
Our policy positions are decided by our Policy Team and our Board of Directors, based on the latest academic research and science, as well as community input. Although we value the generous support of our funders, they do not influence our positions.
Given your name, do you think about present-day issues?
Yes. We choose cause areas that are of concern for the longer-term future and future generations, but also of concern for people today, such as reducing the risk of nuclear war and supporting a treaty on lethal autonomous weapons systems. Similarly, our policy work to future-proof the EU AI Act seeks to protect people’s safety and rights in the here and now. Our outreach team spends considerable time educating the public about the latest developments in our cause areas, and explaining their relevance to people’s lives today.