Policy work
Introduction
Improving the governance of transformative technologies
The policy team at FLI works to improve national and international governance of AI. FLI has spearheaded numerous efforts to this end.
In 2017 we created the influential Asilomar AI Principles, a set of governance principles signed by thousands of leading minds in AI research and industry. More recently, our 2023 open letter sparked a global debate on the rightful place of AI in our societies. FLI has given testimony before the U.S. Congress, the European Parliament, and other key jurisdictions.
In the civilian domain, we advise policymakers on how to best govern advanced AI systems. In the military domain, we advocate for a treaty on autonomous weapons at the United Nations and inform policymakers about the risks of incorporating AI systems into nuclear launch.

Grading the safety practices of leading AI companies
Rapidly improving AI capabilities have increased interest in how companies report, assess, and attempt to mitigate associated risks. The 2024 FLI AI Safety Index convened an independent panel of seven distinguished AI and governance experts to evaluate the safety practices of six leading general-purpose AI companies across six critical domains.
Policy projects
Combatting Deepfakes

AI Safety Summits

Developing possible AI rules for the US

Engaging with AI Executive Orders
AI Convergence: Risks at the Intersection of AI and Nuclear, Biological and Cyber Threats

Strengthening the European AI Act

Educating about Lethal Autonomous Weapons

Global AI governance at the UN
Latest policy papers
FLI AI Safety Index 2024
FLI Interim Recommendations for the AI Action Summit

EU Scientific Panel Feedback

US AI Safety Institute codification (FAIIA vs. AIARA)
Geographical Focus
Where you can find us

United States

European Union

United Nations
Achievements
Some of the things we have achieved

Developed the Asilomar AI Principles

AI recommendations in the UN Roadmap for Digital Cooperation

Max Tegmark's testimony to the European Parliament
Featured posts

FLI Statement on White House National Security Memorandum

Paris AI Safety Breakfast #2: Dr. Charlotte Stix

US House of Representatives call for legal liability on Deepfakes

Statement on the veto of California bill SB 1047

Panda vs. Eagle

US Federal Agencies: Mapping AI Activities
Paris AI Safety Breakfast #1: Stuart Russell