Policy work

We aim to improve the governance of AI in civilian applications, autonomous weapons, and nuclear launch.

Introduction

Improving the governance of transformative technologies

The policy team at FLI works to reduce extreme, large-scale risks from transformative technologies by improving national and international governance of Artificial Intelligence (AI).

FLI has spearheaded numerous efforts to this end. Most notably, in 2017 we created the influential Asilomar AI Principles, a set of governance principles signed by thousands of leading minds in AI research and industry. More recently, the UN Secretary-General consulted FLI as the civil society ‘co-champion’ for AI recommendations on the Digital Cooperation Roadmap.

In the civilian domain, we advise the European Union on how to strengthen and future-proof their upcoming EU AI Act, and U.S. policymakers on how to best govern advanced AI systems. In the military domain, we advocate for a treaty on autonomous weapons at the United Nations and inform policymakers about the risks of incorporating AI systems into nuclear launch.

Our work

Policy projects

Mitigating the Risks of AI Integration in Nuclear Launch

Avoiding nuclear war is in the national security interest of all nations. We pursue a range of initiatives to reduce this risk. Our current focus is on mitigating the emerging risk of AI integration into nuclear command, control and communication.

Strengthening the European AI Act

Our key recommendations include broadening the Act’s scope to regulate general-purpose AI systems and extending the definition of prohibited manipulation to cover any manipulative technique, as well as manipulation that causes societal harm.

Educating about Lethal Autonomous Weapons

Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilize civilization, notably weapons where kill decisions are fully delegated to algorithms.

Developing possible AI rules for the US

Our US policy team advises policymakers in Congress and Statehouses on how to ensure that AI systems are safe and beneficial.

Global AI governance at the UN

Our involvement with the UN's work spans several years and initiatives, including the Roadmap for Digital Cooperation and the Global Digital Compact (GDC).

UK AI Safety Summit

On 1-2 November 2023, the United Kingdom convened the first-ever global government summit focused on AI safety. In the run-up to the summit, FLI produced and published a document outlining key recommendations.

Our content

Latest policy papers

Chemical & Biological Weapons and Artificial Intelligence: Problem Analysis and US Policy Recommendations

February 2024

FLI Response to OMB: Request for Comments on AI Governance, Innovation, and Risk Management

February 2024

FLI Response to NIST: Request for Information on NIST’s Assignments under the AI Executive Order

February 2024

FLI Response to Bureau of Industry and Security (BIS): Request for Comments on Implementation of Additional Export Controls

February 2024


Geographical Focus

Where you can find us

We are a hybrid organisation. Most of our policy work takes place in the US (D.C. and California), the EU (Brussels) and at the UN (New York and Geneva).

United States

In the US, FLI works to increase federal spending on AI safety research and to strengthen the NIST AI Risk Management Framework.

European Union

In Europe, our focus is on strengthening the EU AI Act and encouraging European states to support a treaty on autonomous weapons.

United Nations

At the UN, FLI works to promote the adoption of a legally-binding instrument on autonomous weapons.

Key partners

Achievements

Some of the things we have achieved

Developed the AI Asilomar Principles

In 2017, FLI coordinated the development of the Asilomar AI Principles, one of the earliest and most influential sets of AI governance principles.
View the principles

AI recommendation in the UN digital cooperation roadmap

Our recommendations (3C) on the global governance of AI technologies were adopted in the UN Secretary-General's digital cooperation roadmap.
View the roadmap

Max Tegmark's testimony to the European Parliament

Our founder and board member Max Tegmark presented testimony on the regulation of general-purpose AI systems before the European Parliament.
Watch the testimony

Our content

Featured posts

Here is a selection of posts relating to our policy work:

Disrupting the Deepfake Pipeline in Europe

Leveraging corporate criminal liability under the Violence Against Women Directive to safeguard against pornographic deepfake exploitation.
February 22, 2024

Exploration of secure hardware solutions for safe AI deployment

This collaboration between the Future of Life Institute and Mithril Security explores hardware-backed AI governance tools for transparency, traceability, and confidentiality.
November 30, 2023

Protect the EU AI Act

A last-ditch assault on the EU AI Act threatens to jeopardise one of the legislation's most important functions: preventing our most powerful AI models from causing widespread harm to society.
November 22, 2023

Miles Apart: Comparing key AI Act proposals

Our analysis shows that the recent non-paper drafted by Italy, France, and Germany omits provisions on foundation models and general-purpose AI systems, and offers far less oversight and enforcement than the existing alternatives.
November 21, 2023

Can we rely on information sharing?

We have examined the Terms of Use of major General-Purpose AI system developers and found that they fail to provide assurances about the quality, reliability, and accuracy of their products or services.
October 26, 2023

AI Licensing for a Better Future: On Addressing Both Present Harms and Emerging Threats

This joint open letter by Encode Justice and the Future of Life Institute calls for the implementation of three concrete US policies in order to address current and future harms of AI.
October 25, 2023

Written Statement of Dr. Max Tegmark to the AI Insight Forum

The Future of Life Institute President addresses the AI Insight Forum on AI innovation and provides five US policy recommendations.
October 24, 2023

As Six-Month Pause Letter Expires, Experts Call for Regulation on Advanced AI Development

This week will mark six months since the open letter calling for a six month pause on giant AI experiments. Since then, a lot has happened. Our signatories reflect on what needs to happen next.
September 21, 2023

Contact us

Let's put you in touch with the right person.

We do our best to respond to all incoming queries within three business days. Our team is spread across the globe, so please be considerate and remember that the person you are contacting may not be in your timezone.

Sign up for the Future of Life Institute newsletter

Join 40,000+ others receiving periodic updates on our work and cause areas.