Policy work
Introduction
Improving the governance of transformative technologies
The policy team at FLI works to improve national and international governance of AI. FLI has spearheaded numerous efforts to this end.
In 2017, we created the influential Asilomar AI Principles, a set of governance principles signed by thousands of leading minds in AI research and industry. More recently, our 2023 open letter sparked a global debate on the rightful place of AI in our societies. FLI has given testimony before the U.S. Congress, the European Parliament, and other key jurisdictions.
In the civilian domain, we advise policymakers on how best to govern advanced AI systems. In the military domain, we advocate for a treaty on autonomous weapons at the United Nations and inform policymakers about the risks of integrating AI systems into nuclear launch decisions.
![](https://futureoflife.org/wp-content/uploads/2024/12/FLI-AI-Safety-Index-Press-Release-Thumbnail.png)
Grading the safety practices of leading AI companies
Rapidly improving AI capabilities have increased interest in how companies report, assess, and attempt to mitigate associated risks. The 2024 FLI AI Safety Index convened an independent panel of seven distinguished AI and governance experts to evaluate the safety practices of six leading general-purpose AI companies across six critical domains.
Policy projects
![](https://futureoflife.org/wp-content/uploads/2024/02/Campaign_to_ban_Deepfakes_Thumbnail-1024x640.png)
Combatting Deepfakes
![](https://futureoflife.org/wp-content/uploads/2023/09/1222px-Bletchley_Park_Mansion-1024x603.jpg)
AI Safety Summits
![](https://futureoflife.org/wp-content/uploads/2022/06/NIST-framework-mockup-1-1024x768.png)
Developing possible AI rules for the US
![](https://futureoflife.org/wp-content/uploads/2024/05/1280px-View_of_Oval_Office_in_2017_Cropped-1024x683.jpg)
Engaging with AI Executive Orders
![](https://futureoflife.org/wp-content/uploads/2023/09/3-SB3-Artificial-Escalation-thumbnail-e1694506663778-1024x431.jpg)
AI Convergence: Risks at the Intersection of AI and Nuclear, Biological and Cyber Threats
![](https://futureoflife.org/wp-content/uploads/2022/06/EU-AI-Act-mockup-1024x1024.jpeg)
Strengthening the European AI Act
![](https://futureoflife.org/wp-content/uploads/2019/05/State-of-AI-1024x768.jpeg)
Educating about Lethal Autonomous Weapons
![](https://futureoflife.org/wp-content/uploads/2021/12/Emilia-at-Un-1024x547.png)
Global AI governance at the UN
Latest policy papers
Produced by us
![](https://futureoflife.org/wp-content/uploads/2025/02/AI-Action-Summit-Tool-AI-Explainer-Thumbnail.jpg)
Safety Standards Delivering Controllable and Beneficial AI Tools
![](https://futureoflife.org/wp-content/uploads/2025/02/Policy-Briefing-Responsible-AI-in-Nuclear-Domain-Thumbnail.jpg)
Framework for Responsible Use of AI in the Nuclear Domain
![](https://futureoflife.org/wp-content/uploads/2024/12/AI-Safety-Index-2024-Full-Report-11-Dec-24-Thumbnail.jpg)
FLI AI Safety Index 2024
![](https://futureoflife.org/wp-content/uploads/2024/11/FLI_AI_Action_Summit_Recommendations_Final_EN_Thumbnail.jpg)
FLI Recommendations for the AI Action Summit
Featuring our staff
Safety Standards Delivering Controllable and Beneficial AI Tools
Framework for Responsible Use of AI in the Nuclear Domain
A Taxonomy of Systemic Risks from General-Purpose AI
Effective Mitigations for Systemic Risks from General-Purpose AI
We provide high-quality policy resources to support policymakers
![](https://futureoflife.org/wp-content/uploads/2024/09/United_States_Capitol_west_front_edit2.jpg)
US Federal Agencies: Mapping AI Activities
![](https://futureoflife.org/wp-content/uploads/2024/10/EU_AI_Act_Coverpage_wide.jpg)
EU AI Act Explorer and Compliance Checker
![](https://futureoflife.org/wp-content/uploads/2024/10/Autonomous-Weapons-website-thumbnail-scaled-e1729007148323-1024x577.jpg)
Autonomous Weapons website
Geographical Focus
Where you can find us
![](https://futureoflife.org/wp-content/uploads/2018/07/us.png)
United States
![](https://futureoflife.org/wp-content/uploads/2022/08/Flag_of_Europe.svg_.webp)
European Union
![](https://futureoflife.org/wp-content/uploads/2022/08/Flag_of_the_United_Nations.svg_.png)
United Nations
Achievements
Some of the things we have achieved
![](https://futureoflife.org/wp-content/uploads/2017/01/principled_conversation.jpg)
Developed the Asilomar AI Principles
![](https://futureoflife.org/wp-content/uploads/2022/08/FLI-Website-Graphics.png)
AI recommendations in the UN Roadmap for Digital Cooperation
![](https://futureoflife.org/wp-content/uploads/2022/08/EU-parliament-max-tegmark.png)
Max Tegmark's testimony to the European Parliament
Featured posts
![](https://futureoflife.org/wp-content/uploads/2025/01/Grand_Palais_-_PA00088877_-_Bonhams_2014_-_Vue_densemble_-_003-1024x682.jpg)
Context and Agenda for the 2025 AI Action Summit
![](https://futureoflife.org/wp-content/uploads/2024/10/WhiteHouseSouthFacade.jpg)
FLI Statement on White House National Security Memorandum
![](https://futureoflife.org/wp-content/uploads/2024/10/FOL_BF_25-09-24-11-1024x684.jpg)
Paris AI Safety Breakfast #2: Dr. Charlotte Stix
![](https://futureoflife.org/wp-content/uploads/2024/10/United_States_House_of_Representatives_chamber-1024x537.jpg)
US House of Representatives calls for legal liability on deepfakes
![](https://futureoflife.org/wp-content/uploads/2024/09/SB1047-Veto-Statement-1024x576.jpg)
Statement on the veto of California bill SB 1047
![](https://futureoflife.org/wp-content/uploads/2024/09/US-Chine-UK-AI-Safety-Summit-Speakers.jpg)
Panda vs. Eagle
![](https://futureoflife.org/wp-content/uploads/2024/09/United_States_Capitol_west_front_edit2.jpg)
US Federal Agencies: Mapping AI Activities
![](https://futureoflife.org/wp-content/uploads/2024/08/FLI_AI_Safety_Breakfast_Stuart_Russell_Thumbnail-1024x576.jpg)