Artificial Intelligence Risk Management Framework From US NIST Aims to Improve Trust in AI

By Patrick Arnold
Category: Company and Product News

The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has released its Artificial Intelligence Risk Management Framework (AI RMF), a guidance document for voluntary use by organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI technologies. Congress directed NIST to develop the framework, which was produced in close collaboration with the private and public sectors.

The voluntary framework is intended to help organizations develop and deploy AI technology in ways that enable the United States and other nations and organizations to enhance the trustworthiness of AI while managing its risks. AI poses risks that differ in important ways from those of traditional software. AI systems are trained on data that can change over time, sometimes significantly and unexpectedly, affecting the systems in ways that can be difficult to understand. These systems are also influenced by societal dynamics and human behavior. AI risks can emerge from the complex interplay of these technical and societal factors.

The framework equips organizations to think about AI and risk differently. It promotes a change in institutional culture, encouraging organizations to approach AI with a new perspective — including how to think about, communicate, measure, and monitor AI risks and the technology’s potential positive and negative impacts.

The framework is part of NIST’s larger effort to cultivate trust in AI technologies. It can help organizations in any industry, and of any size, enhance their AI risk management approaches, and it should drive the development of a new set of best practices and standards.

The AI RMF is divided into two parts. The first part discusses how organizations can frame the risks related to AI and outlines the characteristics of trustworthy AI systems. The second part, the core of the framework, describes four specific functions — govern, map, measure, and manage — to help organizations address the risks of AI systems in practice. These functions can be applied in context-specific use cases and at any stage of the AI life cycle.

NIST spent 18 months developing the AI RMF. The document reflects roughly 400 sets of formal comments that NIST received from more than 240 organizations on draft versions of the framework. In addition, NIST plans to launch a Trustworthy and Responsible AI Resource Center to help organizations put AI RMF 1.0 into practice.

The framework is part of NIST’s growing portfolio of AI-related work that includes fundamental and applied research along with a focus on measurement and evaluation, technical standards, and contributions to AI policy. 
