
Policy and Research

We aim to improve the governance of artificial intelligence and its intersection with biological, nuclear, and cyber risks.

Introduction

Improving the governance of transformative technologies

The policy team at FLI works to improve national and international governance of AI.

In 2017 we created the influential Asilomar AI Principles, a set of governance principles signed by thousands of leading minds in AI research and industry. More recently, our 2023 open letter sparked a global debate on the rightful place of AI in our societies. FLI regularly participates in intergovernmental conferences and advises governments around the world on questions of AI governance.

Spotlight

How might AI transform our world?

Tomorrow's AI is a scrollytelling site with 13 interactive, research-backed scenarios showing how advanced AI could transform the world—for better or worse. The project blends realism with foresight to illustrate both the challenges of steering toward a positive future and the opportunities if we succeed. Readers are invited to form their own views on which paths humanity should pursue.

View the site
Our work

Project database

Perspectives of Traditional Religions on Positive AI Futures

Most of the global population participates in a traditional religion, yet the perspectives of these religions are largely absent from strategic AI discussions. This initiative aims to support religious groups in voicing their faith-specific concerns and hopes for a world with AI, and to work with them to resist its harms and realise its benefits.

Promoting a Global AI Agreement

We need international coordination: the benefits of advanced AI should reach across the globe rather than concentrate in a few places, and its risks won't stay within borders but will affect everyone. We work towards an international governance framework that shares these benefits widely and mitigates these global risks.

Recommendations for the U.S. AI Action Plan

The Future of Life Institute's proposal for President Trump's AI Action Plan. Our recommendations aim to protect the presidency from loss of control over AI, promote the development of AI systems free from ideological or social agendas, protect American workers from job loss and replacement, and more.

FLI AI Safety Index: Winter 2025 Edition

Eight AI and governance experts evaluate the safety practices of leading general-purpose AI companies.

AI Convergence: Risks at the Intersection of AI and Nuclear, Biological and Cyber Threats

The dual-use nature of AI systems can amplify the dual-use nature of other technologies, a dynamic known as AI convergence. We provide policy expertise to policymakers in the United States in three key convergence areas: biological, nuclear, and cyber.

AI Safety Summits

Governments are exploring collaboration on navigating a world with advanced AI. FLI provides them with advice and support.

Implementing the European AI Act

Our key recommendations include broadening the Act's scope to regulate general-purpose systems, and extending the definition of prohibited manipulation to cover any type of manipulative technique as well as manipulation that causes societal harm.

Educating about Autonomous Weapons

Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons, notably those in which kill decisions are fully delegated to algorithms, can harm national security and destabilise civilisation.
All our work
Our content

Latest policy and research papers

Produced by us

AI Safety Index: Winter 2025 (2-Page Summary)

December 2025

Control Inversion

November 2025

Embedded Off-Switches for AI Compute

September 2025

AI Safety Index: Summer 2025 (2-Page Summary)

July 2025


All Documents

Featuring our staff and fellows

AI Benefit-Sharing Framework: Balancing Access and Safety

Sumaya Nur Adan, Joanna Wiaterek, Varun Sen Bahl, Ima Bello, Luise Eder, José Jaime Villalobos, Anna Yelizarova, et al.
December 2025

Examining Popular Arguments Against AI Existential Risk: A Philosophical Analysis

Torben Swoboda, Risto Uuk, Lode Lauwaert, Andrew Peter Rebera, Ann-Katrien Oimann, Bartlomiej Chomanski, Carina Prunkl
November 2025

A Blueprint for Multinational Advanced AI Development

Adrien Abecassis, Jonathan Barry, Ima Bello, Yoshua Bengio, Antonin Bergeaud, Yann Bonnet, Philipp Hacker, Ben Harack, Sophia Hatz, et al.
November 2025

Looking ahead: Synergies between the EU AI Office and UK AISI

Lara Thurnherr, Risto Uuk, Tekla Emborg, Marta Ziosi, Isabella Wilkinson, Morgan Simpson, Renan Araujo and Charles Martinet
March 2025


Resources

We provide high-quality resources to support policymakers

US Federal Agencies: Mapping AI Activities

This guide outlines AI activities across the US Executive Branch, focusing on regulatory authorities, budgets, and programs.
9 September, 2024

EU AI Act Explorer and Compliance Checker

Browse the full text of the AI Act online, and discover in 10 minutes how the Act will affect you by answering a series of straightforward questions.
15 January, 2024

Autonomous Weapons website

The era in which algorithms decide who lives and who dies is upon us. We must act now to prohibit and regulate these weapons.
20 November, 2017

Geographical Focus

Where you can find us

We are a hybrid organisation. Most of our policy work takes place in the US (D.C. and California), in the EU (Brussels), and at the UN (New York and Geneva).

United States

In the US, FLI participates in the US AI Safety Institute Consortium and promotes AI legislation at the state and federal levels.

European Union

In Europe, our focus is on strong EU AI Act implementation and encouraging European states to support a treaty on autonomous weapons.

United Nations

At the UN, FLI advocates for a treaty on autonomous weapons and a new international agency to govern AI.
Our content

Featured posts

Here is a selection of posts relating to our policy work:

Michael Kleinman reacts to breakthrough AI safety legislation

FLI celebrates a landmark moment for the AI safety movement and highlights its growing momentum.
3 October, 2025

Context and Agenda for the 2025 AI Action Summit

The AI Action Summit will take place in Paris on 10–11 February 2025. Here we list the agenda and key deliverables.
31 January, 2025

FLI Statement on White House National Security Memorandum

Last week the White House released a National Security Memorandum concerning AI governance and risk management. The NSM issues guidance […]
28 October, 2024

Paris AI Safety Breakfast #2: Dr. Charlotte Stix

The second of our 'AI Safety Breakfasts' event series, featuring Dr. Charlotte Stix on model evaluations, deceptive AI behaviour, and the AI Safety and Action Summits.
14 October, 2024

US House of Representatives calls for legal liability on deepfakes

Recent statements from the US House of Representatives are a reminder of the urgent threat deepfakes present to our society, especially as we approach the U.S. presidential election.
1 October, 2024

Statement on the veto of California bill SB 1047

“The furious lobbying against the bill can only be reasonably interpreted in one way: these companies believe they should play by their own rules and be accountable to no one. This veto only reinforces that belief. Now is the time for legislation at the state, federal, and global levels to hold Big Tech to their commitments.”
30 September, 2024

Panda vs. Eagle

FLI's Director of Policy on why the U.S. national interest is much better served by a cooperative than an adversarial strategy towards China.
27 September, 2024


Contact us

Let's put you in touch with the right person.

We do our best to respond to all incoming queries within three business days. Our team is spread across the globe, so please be considerate and remember that the person you are contacting may not be in your time zone.
Please direct media requests and speaking invitations for Max Tegmark to press@futureoflife.org. All other inquiries can be sent to contact@futureoflife.org.

Sign up for the Future of Life Institute newsletter

Join 40,000+ others receiving periodic updates on our work and focus areas.