The SaaS Law Clinic · Nicole G, Esq.

NIST AI Risk Management Framework

Also known as: NIST AI RMF · AI RMF

The voluntary US framework for managing AI risk, organized around four functions: Govern, Map, Measure, and Manage.

The NIST AI Risk Management Framework is the voluntary US framework, published by the National Institute of Standards and Technology in January 2023, for managing risk in the design, development, deployment, and use of AI systems. Version 1.0 is the current canonical reference. A Generative AI Profile published in 2024 layers on top of it for AI systems that produce text, image, code, audio, or video output.

The framework is organized around four functions: Govern, which establishes the policies, accountability structures, and culture that make AI risk management possible; Map, which contextualizes the AI system, its intended purpose, and its risks; Measure, which assesses the identified risks using qualitative and quantitative methods; and Manage, which prioritizes and responds to those risks based on impact. Each function breaks down into categories and subcategories with concrete practices an organization is expected to implement.

Unlike ISO/IEC 42001, NIST AI RMF is not a certification standard. There is no auditor, no badge, no formal compliance status. It is a framework organizations adopt voluntarily, and its value is in the discipline it imposes rather than in any external recognition. That said, federal agencies, state regulators, and enterprise procurement teams increasingly reference NIST AI RMF as a baseline for what good AI risk management looks like.

In practice, organizations often map their AI Use Policy and AI vendor diligence to NIST AI RMF as a structuring framework, then pursue ISO 42001 certification as the external recognition layer on top. The two are complementary rather than competing.
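Teams that do this mapping often track it as a simple coverage matrix: each of the four RMF functions listed against the internal controls that address it. A minimal sketch in Python, with hypothetical control names (none of these labels come from the RMF itself):

```python
# Hypothetical coverage matrix mapping internal AI Use Policy controls
# to the four NIST AI RMF functions. Control names are illustrative only.
POLICY_TO_RMF = {
    "Govern": ["AI acceptable-use policy", "AI review board charter"],
    "Map": ["AI system inventory", "vendor AI diligence questionnaire"],
    "Measure": ["model evaluation checklist", "bias testing procedure"],
    "Manage": ["incident response runbook", "model decommissioning process"],
}

def coverage_gaps(mapping):
    """Return the RMF functions that have no mapped internal control."""
    required = ("Govern", "Map", "Measure", "Manage")
    return [fn for fn in required if not mapping.get(fn)]

# An empty list means every function has at least one control mapped to it.
print(coverage_gaps(POLICY_TO_RMF))
```

Even a spreadsheet version of this exercise tends to surface the same finding: most organizations have some Govern and Manage artifacts already, while Map and Measure are where the gaps cluster.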

Train this into your team’s playbook.

The corporate training program turns terms like this into the operational discipline your in-house team applies in negotiations every week.