Developing possible AI rules for the US
Our main proposals
In April 2023, we published “Policymaking in the Pause,” a summary of our recommendations for early-stage AI governance policies. FLI President Max Tegmark discussed these recommendations at the Senate AI Insight Forum on Innovation, and they have since been reflected in several Congressional proposals.
Our recommendations for setting up robust AI governance infrastructure and monitoring the proliferation of powerful AI systems include:
- Requiring AI developers to register with the federal government when they acquire large amounts of computing power or conduct large training runs for advanced general-purpose AI systems.
- Requiring deployers to obtain licenses before releasing advanced general-purpose AI systems. Each license would be based on an independent assessment of the system's risks and capabilities, in which the developer must demonstrate the safety of their model.
- Establishing a dedicated authority within the federal government to oversee the development and use of general-purpose AI systems and to cooperate with existing agencies, which would supervise narrow AI systems within their specific domains.
- Making developers of advanced general-purpose AI systems liable for harms caused by their systems. Policymakers should clarify that Section 230 of the Communications Decency Act, the provision social media companies rely on to avoid liability for content posted by their users, does not apply to content generated by AI systems, even if a third party provided the prompt used to generate that content.
- Increasing federal funding both for technical AI safety research and for countermeasures to identify and mitigate harms arising from the misuse, malicious use, or unforeseen behavior of advanced AI systems.
Other US recommendations
US policymakers are increasingly interested in how to manage emerging risks from AI. Our policy team offers concrete recommendations across domains, drawing on extensive in-house expertise and a broad network of experts.
For instance, we provided substantial feedback on the AI Risk Management Framework (AI RMF) as it was developed by the National Institute of Standards and Technology (NIST). The AI RMF is a set of voluntary best practices for building more trustworthy AI systems. Our feedback helped ensure that the framework accounts for extreme and unacceptable risks, AI system loyalty, and the risk management of general-purpose AI systems.
We have also published recommendations on integrating AI into the critical domains of healthcare, education, and labor, and developed specific US policy recommendations for combating the unique cybersecurity risks associated with AI.