
Policy work

We aim to improve the governance of AI in civilian applications, autonomous weapons, and nuclear launch.

Introduction

Improving the governance of transformative technologies

The policy team at FLI works to improve national and international governance of AI. FLI has spearheaded numerous efforts to this end.

In 2017 we created the influential Asilomar AI Principles, a set of governance principles signed by thousands of leading minds in AI research and industry. More recently, our 2023 open letter sparked a global debate on the rightful place of AI in our societies. FLI has given testimony before the U.S. Congress, the European Parliament, and other key jurisdictions.

In the civilian domain, we advise policymakers on how best to govern advanced AI systems. In the military domain, we advocate for a treaty on autonomous weapons at the United Nations and inform policymakers about the risks of incorporating AI systems into nuclear launch.

Featured project

Grading the safety practices of leading AI companies

Rapidly improving AI capabilities have increased interest in how companies report, assess and attempt to mitigate associated risks. The 2024 FLI AI Safety Index convened an independent panel of seven distinguished AI and governance experts to evaluate the safety practices of six leading general-purpose AI companies across six critical domains.

View the index
Our work

Policy projects

Combatting Deepfakes

2024 is rapidly turning into the Year of the Fake. As part of a growing coalition of concerned organizations, FLI is calling on lawmakers to take meaningful steps to disrupt the AI-driven deepfake supply chain.

AI Safety Summits

Governments are increasingly cooperating to ensure AI safety. FLI supports and encourages these efforts.

Developing possible AI rules for the US

Our US policy team advises policymakers in Congress and statehouses on how to ensure that AI systems are safe and beneficial.

Engaging with AI Executive Orders

We provide formal input to agencies across the US federal government, offering technical and policy expertise on a wide range of issues, including export controls, hardware governance, standard-setting, and procurement.

AI Convergence: Risks at the Intersection of AI and Nuclear, Biological and Cyber Threats

The dual-use nature of AI systems can amplify the dual-use nature of other technologies, a phenomenon known as AI convergence. We provide expertise to US policymakers in three key convergence areas: biological, nuclear, and cyber.

Strengthening the European AI Act

Our key recommendations include broadening the Act’s scope to regulate general-purpose AI systems, and extending the definition of prohibited manipulation to cover any manipulative technique as well as manipulation that causes societal harm.

Educating about Lethal Autonomous Weapons

Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilize civilization, notably weapons where kill decisions are fully delegated to algorithms.

Global AI governance at the UN

Our involvement with the UN's work spans several years and initiatives, including the Roadmap for Digital Cooperation and the Global Digital Compact (GDC).
All our work
Our content

Latest policy papers

FLI AI Safety Index 2024

December 2024

FLI Interim Recommendations for the AI Action Summit

November 2024

EU Scientific Panel Feedback

November 2024

US AI Safety Institute codification (FAIIA vs. AIARA)

November 2024

All Documents

Geographical Focus

Where you can find us

We are a hybrid organisation. Most of our policy work takes place in the US (Washington, D.C. and California), in the EU (Brussels), and at the UN (New York and Geneva).

United States

In the US, FLI participates in the US AI Safety Institute Consortium and promotes AI legislation at the state and federal levels.

European Union

In Europe, our focus is on strong EU AI Act implementation and encouraging European states to support a treaty on autonomous weapons.

United Nations

At the UN, FLI advocates for a treaty on autonomous weapons and a new international agency to govern AI.

Achievements

Some of the things we have achieved

Developed the Asilomar AI Principles

In 2017, FLI coordinated the development of the Asilomar AI Principles, one of the earliest and most influential sets of AI governance principles.
View the principles

AI recommendations in the UN Roadmap for Digital Cooperation

Our recommendations on the global governance of AI technologies were adopted as Recommendation 3C in the UN Secretary-General's Roadmap for Digital Cooperation.
View the roadmap

Max Tegmark's testimony to the European Parliament

Our founder and board member Max Tegmark gave testimony on the regulation of general-purpose AI systems before the European Parliament.
Watch the testimony
Our content

Featured posts

Here is a selection of posts relating to our policy work:

FLI Statement on White House National Security Memorandum

Last week the White House released a National Security Memorandum concerning AI governance and risk management. The NSM issues guidance […]
28 October, 2024

Paris AI Safety Breakfast #2: Dr. Charlotte Stix

The second of our 'AI Safety Breakfasts' event series, featuring Dr. Charlotte Stix on model evaluations, deceptive AI behaviour, and the AI Safety and Action Summits.
14 October, 2024

US House of Representatives calls for legal liability on deepfakes

Recent statements from the US House of Representatives are a reminder of the urgent threat deepfakes present to our society, especially as we approach the U.S. presidential election.
1 October, 2024

Statement on the veto of California bill SB 1047

“The furious lobbying against the bill can only be reasonably interpreted in one way: these companies believe they should play by their own rules and be accountable to no one. This veto only reinforces that belief. Now is the time for legislation at the state, federal, and global levels to hold Big Tech to their commitments.”
30 September, 2024

Panda vs. Eagle

FLI's Director of Policy on why the U.S. national interest is much better served by a cooperative rather than an adversarial strategy towards China.
27 September, 2024

US Federal Agencies: Mapping AI Activities

This guide outlines AI activities across the US Executive Branch, focusing on regulatory authorities, budgets, and programs.
9 September, 2024

Paris AI Safety Breakfast #1: Stuart Russell

The first of our 'AI Safety Breakfasts' event series, featuring Stuart Russell on significant developments in AI, AI research priorities, and the AI Safety Summits.
5 August, 2024

Artist Rights Alliance, Annie Lennox Speak Out with Ban Deepfakes Campaign

Lennox states: "we need to hold the tech companies whose AI models enable this harm accountable."
2 August, 2024

Contact us

Let's put you in touch with the right person.

We do our best to respond to all incoming queries within three business days. Our team is spread across the globe, so please be considerate and remember that the person you are contacting may not be in your time zone.

Please direct media requests and speaking invitations for Max Tegmark to press@futureoflife.org. All other inquiries can be sent to contact@futureoflife.org.

Sign up for the Future of Life Institute newsletter

Join 40,000+ others receiving periodic updates on our work and cause areas.