
Project

Strengthening the NIST AI Risk Management Framework

Our feedback on the first draft of the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework addressed extreme and unacceptable risks, the loyalty of AI systems, and the risk management of general-purpose systems.

What is the AI Risk Management Framework?

The ‘AI Risk Management Framework’ (AI RMF) is a tool that AI developers can use to assess whether their systems can be trusted. Through the National AI Initiative Act of 2020, the US Congress directed the National Institute of Standards and Technology (NIST) to “develop (...) a voluntary risk management framework for trustworthy artificial intelligence systems.”

Work on the Framework began in July 2021, and the first version was published in January 2023. An overview of the project can be found on NIST’s website.

Why does the Framework matter?

Companies are not required to use the Framework, but failing to implement it can undermine consumer trust and may expose them to liability for damages when their AI systems cause harm. The RMF also helps researchers consider the ethical implications of their work, and gives US federal agencies guidance on complying with internal government policies on safe AI.

Our feedback to NIST

FLI has been actively involved in the development of the Framework. In Congress, FLI strongly supported the legislation authorizing the AI RMF, and we continue to provide input and feedback to NIST through written contributions and participation in workshops.

Our full feedback is available below:


Documents

Statement Regarding the Release of NIST's AI RMF (January 2023)

Response to the First Draft of the AI RMF (April 2022)

Response to the RFI: Artificial Intelligence Risk Management Framework (April 2022)

FLI Response to NIST Concept Paper (January 2022)

FLI Response to Trust and AI document (December 2021)

Other projects

If you enjoyed this, you might also like:

Mitigating the Risks of AI Integration in Nuclear Launch

Avoiding nuclear war is in the national security interest of all nations. We pursue a range of initiatives to reduce this risk. Our current focus is on mitigating the emerging risk of AI integration into nuclear command, control, and communications.

Educating about Lethal Autonomous Weapons

Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilize civilization, notably weapons where kill decisions are fully delegated to algorithms.
