Policy work
Introduction
Improving the governance of transformative technologies
The policy team at FLI works to improve national and international governance of AI. FLI has spearheaded numerous efforts to this end.
In 2017 we created the influential Asilomar AI Principles, a set of governance principles signed by thousands of leading AI researchers and industry figures. More recently, our 2023 open letter sparked a global debate on the rightful place of AI in our societies. FLI has given testimony before the U.S. Congress, the European Parliament, and other key jurisdictions.
In the civilian domain, we advise policymakers on how best to govern advanced AI systems. In the military domain, we advocate for a treaty on autonomous weapons at the United Nations and inform policymakers about the risks of incorporating AI systems into nuclear launch decisions.


Grading the safety practices of leading AI companies
Rapidly improving AI capabilities have increased interest in how companies report, assess, and attempt to mitigate the associated risks. The 2024 FLI AI Safety Index convened an independent panel of seven distinguished AI and governance experts to evaluate the safety practices of six leading general-purpose AI companies across six critical domains.
Policy projects

Combatting Deepfakes

AI Safety Summits

Developing possible AI rules for the US

Engaging with AI Executive Orders

AI Convergence: Risks at the Intersection of AI and Nuclear, Biological and Cyber Threats

Strengthening the European AI Act

Educating about Lethal Autonomous Weapons

Global AI governance at the UN
Latest policy papers
Produced by us

Safety Standards Delivering Controllable and Beneficial AI Tools

Framework for Responsible Use of AI in the Nuclear Domain

FLI AI Safety Index 2024

FLI Recommendations for the AI Action Summit
Featuring our staff
Safety Standards Delivering Controllable and Beneficial AI Tools
Framework for Responsible Use of AI in the Nuclear Domain
A Taxonomy of Systemic Risks from General-Purpose AI
Effective Mitigations for Systemic Risks from General-Purpose AI
We provide high-quality policy resources to support policymakers

US Federal Agencies: Mapping AI Activities

EU AI Act Explorer and Compliance Checker

Autonomous Weapons website
Geographical Focus
Where you can find us

United States

European Union

United Nations
Achievements
Some of the things we have achieved

Developed the Asilomar AI Principles

AI recommendations in the UN's Roadmap for Digital Cooperation

Max Tegmark's testimony to the European Parliament
Featured posts

Context and Agenda for the 2025 AI Action Summit

FLI Statement on White House National Security Memorandum

Paris AI Safety Breakfast #2: Dr. Charlotte Stix

US House of Representatives call for legal liability on Deepfakes

Statement on the veto of California bill SB 1047

Panda vs. Eagle

US Federal Agencies: Mapping AI Activities
