The NIST AI Risk Management Framework (AI RMF) is a voluntary US framework, published by the National Institute of Standards and Technology in January 2023, for managing risk in the design, development, deployment, and use of AI systems. Version 1.0 (NIST AI 100-1) remains the canonical reference. A Generative AI Profile (NIST AI 600-1), published in July 2024, layers on top of it for AI systems that produce text, image, code, audio, or video output.
The framework is organized around four functions: Govern, which establishes the policies, accountability, and culture that make AI risk management possible; Map, which contextualizes the AI system, its purpose, and its risks; Measure, which assesses the identified risks using qualitative and quantitative methods; and Manage, which prioritizes and responds to risks based on their impact. Each function is broken down into categories and subcategories with concrete practices an organization is expected to implement.
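The function-category-subcategory structure lends itself to a simple internal tracker. A minimal sketch follows; the four function names come from the framework itself, but the example practices, status values, and the `open_items` helper are illustrative assumptions, not official subcategory text or any NIST tooling.

```python
# Hypothetical sketch of tracking AI RMF adoption status internally.
# Function names (Govern, Map, Measure, Manage) are from the framework;
# the practices and statuses below are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Function:
    name: str
    # Maps a practice description to a status string.
    practices: dict[str, str] = field(default_factory=dict)

rmf = [
    Function("Govern", {"AI use policy approved": "done",
                        "Accountability roles assigned": "in_progress"}),
    Function("Map", {"System purpose and context documented": "done"}),
    Function("Measure", {"Bias evaluation run on model outputs": "not_started"}),
    Function("Manage", {"Risk response plan drafted": "not_started"}),
]

def open_items(functions: list[Function]) -> list[tuple[str, str]]:
    """Return (function, practice) pairs not yet completed."""
    return [(f.name, p) for f in functions
            for p, status in f.practices.items() if status != "done"]

for fn, practice in open_items(rmf):
    print(f"{fn}: {practice}")
```

Even a flat structure like this makes gaps visible per function, which mirrors how the framework is meant to be worked through rather than read.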
Unlike ISO/IEC 42001, NIST AI RMF is not a certification standard. There is no auditor, no badge, no formal compliance status. It is a framework organizations adopt voluntarily, and its value is in the discipline it imposes rather than in any external recognition. That said, federal agencies, state regulators, and enterprise procurement teams increasingly reference NIST AI RMF as a baseline for what good AI risk management looks like.
In practice, organizations often map their AI Use Policy and AI vendor diligence to NIST AI RMF as a structuring framework, then pursue ISO 42001 certification as the external recognition layer on top. The two are complementary rather than competing.