Strengthening the NIST AI Risk Management Framework

What is the AI Risk Management Framework?
The AI Risk Management Framework (AI RMF) is a tool that developers can use to assess whether their AI systems can be trusted. Through the National AI Initiative Act of 2020, the US Congress directed the National Institute of Standards and Technology (NIST) to “develop [...] a voluntary risk management framework for trustworthy artificial intelligence systems.”
Development of the Framework began in July 2021, and the first version was published in January 2023. An overview of the project is available on this NIST page.
Why does the Framework matter?
Companies are not required to use the Framework, but failing to implement it can undermine consumer trust and may expose them to liability for damages when their AI systems cause harm. The Framework also helps researchers consider the ethical implications of their work, and it guides US federal agencies in complying with internal government policies on safe AI.
Our feedback to NIST
FLI has been actively involved in the development of the Framework. In Congress, FLI strongly supported the legislation authorizing the AI RMF, and we continue to provide input and feedback to NIST through written contributions and participation in workshops.
Our full feedback is available below:
Workshops
- Panel 1 of second workshop on the AI RMF (March 2022)
- Panel 1.3 - Building the NIST AI Risk Management Framework: Workshop #3 (October 2022)
Documents
- Statement Regarding the Release of NIST's AI RMF
- Response to the First Draft of the AI RMF
- Response to the RFI: Artificial Intelligence Risk Management Framework
- FLI Response to NIST Concept Paper
- FLI Response to Trust and AI document