Opinion

USA - NIST proposes a framework for AI Risk Management

Published: 8 February 2023

On 26 January 2023, the National Institute of Standards and Technology (NIST) of the U.S. Department of Commerce published its AI Risk Management Framework (AI RMF), a guidance document for organizations designing, developing, deploying or using artificial intelligence (AI) systems.

The AI RMF offers a flexible, voluntary framework for managing the risks of AI technologies. It promotes a change in organizational culture that prioritizes the identification and management of AI risks and their potential impacts on individuals and society, and encourages organizations to incorporate AI risk management into their broader enterprise risk management strategies and processes.

What is an AI system?

The AI RMF defines an AI system as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments… with varying levels of autonomy”. This definition is aligned with the OECD Recommendation on AI and with the international standard ISO/IEC 22989:2022 “Information technology - Artificial intelligence”.

Unique or increased risks posed by AI systems

The AI RMF attempts to offer a comprehensive approach to managing AI-specific risks. NIST notes that these risks are unique and differ from the risks of traditional software or information-based systems, which means that traditional risk management approaches developed for IT systems do not readily apply. Examples of these risks include:

  • privacy risks due to the enhanced data aggregation capabilities of AI systems;
  • the data used to build an AI system may not be a true or appropriate representation of the context or intended use of the system; the datasets may also include harmful biases or suffer from other data quality issues;
  • without proper controls, AI systems can amplify or worsen inequitable or undesirable outcomes for individuals, groups or communities;
  • the functionality and trustworthiness of AI systems trained on data that can change significantly and unexpectedly over time may be affected in ways that are hard to understand;
  • intentional or unintentional changes during the training of AI systems may fundamentally alter their performance;
  • difficulty in performing regular AI-based software testing, or determining what to test, since AI systems are not subject to the same controls as traditional software code development; and
  • security risks associated with third-party AI technologies, transfer learning, and off-label use, where AI systems may be trained for decision-making outside an organization’s security controls.

NIST also notes that existing guidance on privacy and cybersecurity risk management, while generally applicable to AI systems, does not comprehensively address many AI system risks, such as harmful bias, risks related to generative AI, security concerns related to evasion, model extraction, membership inference, availability, or other machine learning attacks.

Risk identification and management

The AI RMF is split into two parts. The first part discusses how organizations can frame AI risks and sets out the typical characteristics of trustworthy AI systems (e.g. that they are reliable, secure, accountable, transparent, explainable, privacy-enhanced and fair).

The AI RMF addresses each of these characteristics in detail and proposes a flexible approach to addressing new risks as they emerge, with all parties and AI stakeholders (such as developers and users of AI systems) taking responsibility for managing risk in the AI systems they develop, deploy or use.

AI RMF Core

The second part of the AI RMF (the AI RMF Core) outlines four key functions (Govern, Map, Measure and Manage) to help organizations address the risks of AI systems in practice at any stage of the AI lifecycle. Each function is divided into specific actions and outcomes, and the guidance explains how implementing each function can help organizations prioritize, manage and regularly monitor AI risks, document the process, and increase transparency and accountability over time.

A companion voluntary AI RMF Playbook was published alongside the framework to help organizations navigate it. The Playbook addresses each of the core functions in detail and describes use-case profiles (typical scenarios such as large language models, cloud-based services or hiring) that provide useful examples of how to implement the AI RMF in practice.

Further plans

NIST aims to update the framework regularly in response to feedback from the AI community and is open to suggestions for improvements to the AI RMF Playbook at any time. Comments submitted by the end of February 2023 will be included in the updated version of the Playbook, planned for release in spring 2023. NIST also plans to open a Trustworthy and Responsible AI Resource Center to assist organizations with implementing the AI RMF.

Read the press release here, the AI RMF here and the AI RMF Playbook here (click on each section of the diagram to access specific RMF core functions).

Content Disclaimer

This content was originally published by Allen & Overy before the A&O Shearman merger.
