Hear from us every month
Join 40,000+ other newsletter subscribers for monthly updates on the work we’re doing to safeguard our shared futures.

Featured videos

The best recent content from us and our partners:
More videos

Featured projects

Read about some of our current featured projects:
Recently announced

Creative Contest: Keep The Future Human

$100,000+ in prizes for creative digital media that engages with the key ideas of the Keep The Future Human essay, helps them reach a wider audience, and motivates action in the real world.

FLI AI Safety Index: Summer 2025 Edition

Seven AI and governance experts evaluate the safety practices of six leading general-purpose AI companies.

Recommendations for the U.S. AI Action Plan

The Future of Life Institute proposal for President Trump’s AI Action Plan. Our recommendations aim to protect the presidency from AI loss-of-control, promote the development of AI systems free from ideological or social agendas, protect American workers from job loss and replacement, and more.

AI Safety Summits

Governments are exploring collaboration on navigating a world with advanced AI. FLI provides them with advice and support.

AI’s Role in Reshaping Power Distribution

Advanced AI systems are set to reshape the economy and power structures in society. They offer enormous potential for progress and innovation, but also pose risks of concentrated control, unprecedented inequality, and disempowerment. To ensure AI serves the public good, we must build resilient institutions, competitive markets, and systems that widely share the benefits.

Envisioning Positive Futures with Technology

Storytelling plays a significant role in shaping people's beliefs and ideas about humanity's potential future with technology. While many narratives warn of dystopia, positive visions of the future are in short supply. We seek to incentivize the creation of plausible, aspirational, hopeful visions of a future we want to steer towards.

Perspectives of Traditional Religions on Positive AI Futures

Most of the global population participates in a traditional religion. Yet the perspectives of these religions are largely absent from strategic AI discussions. This initiative aims to support religious groups in voicing their faith-specific concerns and hopes for a world with AI, and to work with them to resist the harms and realise the benefits.

Digital Media Accelerator

The Digital Media Accelerator supports creators producing digital content that raises awareness and understanding of ongoing AI developments and issues.

Keep The Future Human

Why and how we should close the gates to AGI and superintelligence, and what we should build instead | A new essay by Anthony Aguirre, Executive Director of FLI.

Multistakeholder Engagement for Safe and Prosperous AI

FLI is launching new grants to educate and engage stakeholder groups, as well as the general public, in the movement for safe, secure and beneficial AI.

AI Existential Safety Community

A community of faculty and AI researchers dedicated to ensuring AI is developed safely. Members are invited to attend meetings, participate in an online community, and apply for travel support.

Fellowships

Since 2021 we have offered PhD and Postdoctoral fellowships in Technical AI Existential Safety. In 2024, we launched a PhD fellowship in US-China AI Governance.

RFPs, Contests, and Collaborations

Requests for Proposals (RFPs), public contests, and collaborative grants in direct support of FLI internal projects and initiatives.

Newsletter

Regular updates about the technologies shaping our world

Every month, we bring 40,000+ subscribers the latest news on how emerging technologies are transforming our world. Each edition includes a summary of major developments in our focus areas and key updates on our work.

Subscribe to our newsletter to receive these highlights at the end of each month.

Recent editions

Plus: Facing public scrutiny, AI billionaires back new super PAC; our new $100K Keep the Future Human creative contest; Tomorrow's AI; and more.
4 September, 2025
Plus: Update on EU guidelines; the recent AI Security Forum; how AI increases nuclear risk; and more.
1 August, 2025
Plus: The OpenAI Files; creepy new InsideAI video; and more.
3 July, 2025
Plus: Updates on the EU AI Act Code of Practice; the Singapore Consensus; open letter from Evangelical leaders; and more.
31 May, 2025
View all

Latest content

The most recent content we have published:

Featured content

We must not build AI to replace humans.
A new essay by Anthony Aguirre, Executive Director of the Future of Life Institute
Humanity is on the brink of developing artificial general intelligence that exceeds our own. It's time to close the gates on AGI and superintelligence... before we lose control of our future.
Read the essay ->

Posts

Are we close to an intelligence explosion?

AIs are inching ever closer to a critical threshold. Beyond this threshold lie great risks, but crossing it is not inevitable.
21 March, 2025

The Impact of AI in Education: Navigating the Imminent Future

What must be considered to build a safe but effective future for AI in education, and for children to be safe online?
13 February, 2025

Context and Agenda for the 2025 AI Action Summit

The AI Action Summit will take place in Paris on 10-11 February 2025. Here we list the agenda and key deliverables.
31 January, 2025

A Buddhist Perspective on AI: Cultivating freedom of attention and true diversity in an AI future

The AI-facilitated intelligence revolution is claimed by some to be setting humanity on a glidepath into utopian futures of nearly effortless satisfaction and frictionless choice. We should beware.
20 January, 2025
View all

Use your voice

Protect what's human.
Big Tech is racing to build increasingly powerful and uncontrollable AI systems designed to replace humans. You have the power to do something about it.
Take action today to protect our future:
Take Action ->

Our people

A team committed to the future of life.
Our staff brings together a diverse range of expertise from academia, government, and industry. Their backgrounds range from machine learning to medicine and everything in between.
Meet our team
Open Roles

Our History

We’ve been working to safeguard humanity’s future since 2014.

Learn about FLI’s work and achievements since its founding, including historic conferences, grant programs, and open letters that have shaped the course of technology.

Explore our history ->

Sign up for the Future of Life Institute newsletter

Join 40,000+ others receiving periodic updates on our work and focus areas.