Developing possible AI rules for the US

Our US policy team advises policymakers in Congress and state legislatures on how to ensure that AI systems are safe and beneficial.

Our main proposals

In April 2023, we published “Policymaking in the Pause,” a summary of our recommendations for early-stage AI governance policies. These proposals were discussed by FLI President Max Tegmark at the Senate AI Insight Forum on Innovation and have been reflected in several Congressional proposals.

Our recommendations for setting up robust AI governance infrastructure and monitoring the proliferation of powerful AI systems include:

  • Requiring AI developers to register with the federal government when building up large amounts of computing power or carrying out large training runs for advanced general-purpose AI systems.
  • Requiring that deployers obtain licenses before releasing advanced general-purpose AI systems. Each license would be based on an independent assessment of the system's risks and capabilities, in which the AI developer must demonstrate the safety of their model.
  • Establishing a dedicated authority within the federal government to oversee the development and use of general-purpose AI systems, and cooperate with existing agencies to supervise narrow AI systems within their specific domains.
  • Making developers of advanced general-purpose AI systems liable for harms caused by their systems. Policymakers should clarify that Section 230 of the Communications Decency Act does not apply to content generated by AI systems, even if a third party provided the prompt that generated that content. Section 230 is the provision that social media companies rely on to avoid liability for content posted by users.
  • Increasing federal funding for both technical AI safety research and for countermeasures to identify and mitigate harms that emerge from misuse, malicious use, or the unforeseen behavior of advanced AI systems.

Other US recommendations

Our policy team offers concrete recommendations to guide the growing interest among US policymakers in how to manage emerging risks from AI across domains. We draw on both extensive in-house expertise and a broad network of experts.

For instance, we provided substantial feedback on the AI Risk Management Framework (AI RMF) as it was developed by the National Institute of Standards and Technology (NIST). The AI RMF is a collection of voluntary best practices for producing more trustworthy AI systems. Our feedback ensured that the RMF accounted for extreme and unacceptable risks, AI system loyalty, and the risk management of general-purpose AI systems.

We have also informed Congress’s discourse on integrating AI into the critical domains of healthcare, education, and labor, and developed specific US policy recommendations for combating the unique cybersecurity risks associated with AI.

Key documents

Cybersecurity and AI: Problem Analysis and US Policy Recommendations

October 2023

Policymaking in the Pause

April 2023

Statement Regarding the Release of NIST’s AI RMF

January 2023

Response to the First Draft of the AI RMF

April 2022
View all policy documents

Other projects in this area

We work on a range of projects across a few key areas. See some of our other projects in this area of work:

Mitigating the Risks of AI Integration in Nuclear Launch

Avoiding nuclear war is in the national security interest of all nations. We pursue a range of initiatives to reduce this risk. Our current focus is on mitigating the emerging risk of AI integration into nuclear command, control, and communications.

Strengthening the European AI Act

Our key recommendations include broadening the Act’s scope to regulate general-purpose AI systems and extending the definition of prohibited manipulation to cover any manipulative technique, as well as manipulation that causes societal harm.
