Notice: You are viewing a detailed profile of an entity in our US Agency Mapping resource, in which we have compiled all information relevant for the regulation of advanced AI technologies in the US. To see an overview of all entities, return to the entity overview page.
Index
Department of Commerce (DoC)
International Trade Administration (ITA)
US Patent and Trademark Office (USPTO)
Bureau of Industry and Security (BIS)
National Institute of Standards and Technology (NIST)
US AI Safety Institute (USAISI)
National Telecommunications and Information Administration (NTIA)
Department of Energy (DoE)
Office of Cybersecurity, Energy Security, and Emergency Response (CESER)
Advanced Scientific Computing Research (ASCR)
Office of Critical and Emerging Technology (OCET)
Department of Homeland Security (DoHS)
Cybersecurity and Infrastructure Security Agency (CISA)
Office of Cyber, Infrastructure, Risk, and Resilience (CIRR)
National Institute of Standards and Technology (NIST)
US AI Safety Institute (USAISI)
The U.S. AI Safety Institute (USAISI) aims to advance the science of AI safety to enable responsible AI innovation by developing methods to assess and mitigate risks of advanced AI systems. Its work includes creating benchmarks, evaluation tools, and safety guidelines for AI models and applications. USAISI will collaborate across government, industry, and academia to build a shared understanding of AI capabilities and potential harms.
Leadership: Elizabeth Kelly, Director, and Paul Christiano, Head of AI Safety, U.S. Artificial Intelligence Safety Institute
Goals
1. Advancing AI safety science through research
This goal focuses on developing empirically grounded tests, benchmarks, and evaluations for AI models, systems, and agents. AISI aims to address both near-term and long-term AI safety challenges through the following research projects:
- Perform and coordinate technical research: Develop safety guidelines, tools, and techniques for issues like synthetic content detection, model security, and technical safeguards.
- Conduct pre-deployment testing, evaluation, validation, and verification (TEVV) of advanced models, systems, and agents: Assess potential risks before deployment using methods like automated capability evaluations and expert red-teaming (a purely illustrative sketch of such an evaluation follows this list).
- Conduct TEVV of advanced AI models, systems, and agents: Develop scientific understanding of existing risks related to individual rights, public safety, and national security.
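To make the idea of an automated capability evaluation more concrete, here is a minimal, purely illustrative Python sketch of how a pre-deployment TEVV harness might score a model against a small benchmark of prompts with known answers. The model interface, benchmark items, and exact-match scoring rule are assumptions for this example only; they do not describe AISI's actual tooling or protocols.

```python
# Hypothetical sketch of an automated capability evaluation, of the kind a
# pre-deployment TEVV harness might run. Nothing here reflects actual AISI
# tooling; the model interface, benchmark items, and scoring rule are
# invented for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class BenchmarkItem:
    prompt: str       # input shown to the model under test
    reference: str    # expected answer used for scoring

def evaluate_capability(model: Callable[[str], str],
                        benchmark: list[BenchmarkItem]) -> float:
    """Return the fraction of benchmark items the model answers correctly
    under a simple exact-match rule (a stand-in for the richer graders a
    real evaluation would use)."""
    correct = 0
    for item in benchmark:
        answer = model(item.prompt).strip().lower()
        if answer == item.reference.strip().lower():
            correct += 1
    return correct / len(benchmark)

if __name__ == "__main__":
    # Stub model and toy benchmark, purely for demonstration.
    def stub_model(prompt: str) -> str:
        return "4" if "2 + 2" in prompt else "unknown"

    toy_benchmark = [
        BenchmarkItem(prompt="What is 2 + 2?", reference="4"),
        BenchmarkItem(prompt="Name the capital of France.", reference="Paris"),
    ]
    score = evaluate_capability(stub_model, toy_benchmark)
    print(f"capability score: {score:.2f}")  # 0.50 for the stub above
```

In practice, the stub model would be replaced by the system under test and the exact-match grader by task-specific scoring, with expert red-teaming covering behaviors that automated checks miss.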
2. Developing and disseminating AI safety practices
This goal focuses on translating scientific understanding into practical implementation of AI safety. The institute aims to provide stakeholders across the AI sector with high-quality, science-based information and tools for risk evaluation and mitigation, enabling informed decision-making through the following programs:
- Build and publish specific metrics, evaluation tools, methodological guidelines, protocols, and benchmarks for assessing risks of advanced AI across different domains and deployment contexts: AISI plans to develop and release guidelines and tools for TEVV of various risks. These resources will be designed for developers, deployers, and third-party independent evaluators. The guidelines will include specific evaluation protocols and may also introduce new benchmarks for assessing model capabilities.
- Develop and publish risk-based mitigation guidelines and safety mechanisms to support the responsible design, development, deployment, use, and governance of advanced AI models, systems, and agents: This project aims to create guidance on mitigating existing harms and addressing potential and emerging risks, including threats to public safety and national security.
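As a companion to the bullet above on risk-based mitigation guidance, the following hypothetical sketch shows one way evaluation scores could be mapped to risk tiers and candidate mitigations. The domains, thresholds, and mitigation text are invented for this example and do not correspond to any published AISI guideline.

```python
# Hypothetical mapping from evaluation results to risk tiers and suggested
# mitigations. Thresholds, domains, and mitigation text are invented for
# illustration and do not reflect any published AISI guidance.
RISK_TIERS = [
    # (minimum score triggering the tier, tier label, example mitigation)
    (0.8, "high",     "restrict deployment pending independent red-team review"),
    (0.5, "moderate", "deploy with monitoring and domain-specific safeguards"),
    (0.0, "low",      "standard release process"),
]

def assign_risk_tier(domain: str, score: float) -> str:
    """Return a one-line risk summary for a domain given a 0-1 risk score."""
    for threshold, label, mitigation in RISK_TIERS:
        if score >= threshold:
            return f"{domain}: {label} risk (score {score:.2f}) -> {mitigation}"
    raise ValueError("score must be non-negative")

if __name__ == "__main__":
    # Toy per-domain scores, e.g. produced by separate evaluation suites.
    for domain, score in {"cyber": 0.83, "synthetic content": 0.42}.items():
        print(assign_risk_tier(domain, score))
```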
3. Supporting institutions, communities, and coordination around AI safety
This goal focuses on promoting the global adoption of AI safety practices and fostering international collaboration so that AISI's guidelines are implemented worldwide, through the following projects:
- Promote adoption of AISI guidelines, evaluations, and recommended AI safety measures and risk mitigations: AISI plans to initiate and support information-sharing and collaboration with research labs, third-party evaluators, and experts across the spectrum of AI development, deployment, and use. The institute aims to transition voluntary commitments into actionable guidelines and promote the adoption of AI safety best practices. AISI also intends to support the ecosystem of third-party evaluators and contribute to scientific reports, articles, and guidance that can inform AI safety legislation or policy.
- Lead an inclusive, international network on the science of AI safety: AISI aims to serve as a partner for other AI Safety Institutes, national research organizations, and multilateral entities like the OECD and G7. The goal is to create commonly accepted scientific methodologies.
Programs
Artificial Intelligence Safety Institute Consortium (AISIC): The AISIC brings together over 280 organizations from industry, academia, government, and civil society to work on problems related to AI safety. Its primary focus is developing guidelines and standards for AI measurement and policy.