
Our work

An overview of the type of work we do, and all of our current and past projects.

Work areas

Our areas of work

We work on projects across a few distinct areas:


We conduct policy advocacy in the United States and the European Union, and at the United Nations.
Our Policy work


We work on projects that aim to guide humanity towards the beneficial outcomes made possible by transformative technologies.
Our Futures work


We produce educational materials aimed at informing public discourse, as well as encouraging people to get involved.
Our Outreach work


We provide grants to individuals and organisations working on projects that further our mission.
Our Grant Programs


We convene leaders of the relevant fields to discuss ways of ensuring the safe development and use of powerful technologies.
Our Events

Our achievements

Our most important contributions

Here are a few of our proudest achievements:

Hosted the first AI Safety conferences

We were the first to convene leading figures in the field of AI to discuss our concerns about potential safety risks of the emerging technology.
View our events

Created the first AI Safety grant program

From 2015 to 2017, we ran the first ever grant program dedicated to funding AI Safety projects. We currently offer a range of grant opportunities for projects that further our mission.
View our grants

Developed the Asilomar AI Principles

In 2017, FLI coordinated the development of the Asilomar AI Principles, one of the earliest and most influential sets of AI governance principles.
View the principles

Celebrated 18 unsung heroes with Future of Life Awards

Every year since 2017, the Future of Life Award has celebrated the contributions of people who have helped safeguard the future of life.
See the award

Produced viral video series raising the alarm on lethal autonomous weapons

We produced two short films, with a combined 75+ million views, depicting a world in which lethal autonomous weapons have been allowed to proliferate.
Watch the videos

AI recommendations in the UN digital cooperation roadmap

Our recommendations on the global governance of AI technologies were adopted in the UN Secretary-General's digital cooperation roadmap.

View the roadmap


What we're working on

Here is an overview of all the projects we are working on right now:

Protecting Against Deepfakes

2024 is rapidly turning into the Year of Fake. As part of a growing coalition of concerned organizations, FLI is calling on lawmakers to take meaningful steps to disrupt the AI-driven deepfake supply chain.

AI Safety Summits

Governments are increasingly cooperating to ensure AI Safety. FLI supports and encourages these efforts.

Superintelligence Imagined

A contest for the best creative educational materials on superintelligence, its associated risks, and the implications of this technology for our world. 5 prizes at $10,000 each.

Developing possible AI rules for the US

Our US policy team advises policymakers in Congress and Statehouses on how to ensure that AI systems are safe and beneficial.

Engaging with the AI Executive Order

We provide formal input to agencies across the US federal government, including technical and policy expertise on a wide range of issues such as export controls, hardware governance, standard setting, procurement, and more.

Perspectives of Traditional Religions on Positive AI Futures

Most of the global population participates in a traditional religion. Yet the perspectives of these religions are largely absent from strategic AI discussions. This initiative aims to support religious groups to voice their faith-specific concerns and hopes for a world with AI, and work with them to resist the harms and realise the benefits.

The Elders Letter on Existential Threats

The Elders, the Future of Life Institute and a diverse range of preeminent public figures are calling on world leaders to urgently address the ongoing harms and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI.

Realising Aspirational Futures – New FLI Grant Opportunities

We are opening two new funding opportunities to support research into the ways that artificial intelligence can be harnessed safely to make the world a better place.

The Windfall Trust

The Windfall Trust is an ambitious initiative aimed at investigating and establishing a robust international institution that could provide universal basic assets in the event of a windfall generated by advances in AI.

AI Convergence: Risks at the Intersection of AI and Nuclear, Biological and Cyber Threats

The dual-use nature of AI systems can amplify the dual-use nature of other technologies—this is known as AI convergence. We provide policy expertise to policymakers in the United States in three key convergence areas: biological, nuclear, and cyber.

Strengthening the European AI Act

Our key recommendations include broadening the Act’s scope to regulate general-purpose systems, and extending the definition of prohibited manipulation to cover any type of manipulative technique as well as manipulation that causes societal harm.

Imagine A World Podcast

Can you imagine a world in 2045 where we manage to avoid the climate crisis, major wars, and the potential harms of artificial intelligence? Our new podcast series explores ways we could build a more positive future, and offers thought-provoking ideas for how we might get there.

Educating about Lethal Autonomous Weapons

Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilize civilization, notably weapons where kill decisions are fully delegated to algorithms.

Artificial Escalation

Our fictional film depicts a world where artificial intelligence ('AI') is integrated into nuclear command, control and communications systems ('NC3') with terrifying results.

Global AI governance at the UN

Our involvement with the UN's work spans several years and initiatives, including the Roadmap for Digital Cooperation and the Global Digital Compact (GDC).

Worldbuilding Competition

The Future of Life Institute invited teams from across the globe to compete for a prize purse of up to $100,000 by designing visions of a plausible, aspirational future that includes strong artificial intelligence.

Future of Life Award

Every year, the Future of Life Award is given to one or more unsung heroes who have made a significant contribution to preserving the future of life.

Future of Life Institute Podcast

A podcast dedicated to hosting conversations with some of the world's leading thinkers and doers in the field of emerging technology and risk reduction. 140+ episodes since 2015, 4.8/5 stars on Apple Podcasts.

Related pages

Were you looking for something else?

Here are a few other pages you might have been looking for:

Our mission

Read about our mission and our core principles.
View page

Take action

No matter your level of experience or seniority, there is something you can do to help us ensure the future of life is positive.
View page

Events work

We convene leaders of the relevant fields to discuss ways of ensuring the safe development and use of powerful technologies.
View page

Sign up for the Future of Life Institute newsletter

Join 40,000+ others receiving periodic updates on our work and cause areas.