News / May 2022

Recommendations to NIST on the AI Risk Management Framework Initial Draft

On April 28, 2022, a group of researchers affiliated with centers at the University of California, Berkeley, with expertise in AI research and development, safety, security, policy, and ethics, submitted this formal response to the National Institute of Standards and Technology (NIST) on the Initial Draft of the NIST AI Risk Management Framework (AI RMF). The researchers previously submitted responses to NIST in September 2021 on the NIST AI RMF Request for Information (RFI) and in January 2022 on the AI RMF Concept Paper.

In this submission, the researchers provide in-depth comments, first on the topics and questions posed by NIST in the AI RMF Initial Draft (listed below), and then on specific passages in the Initial Draft.

1. Whether the AI RMF appropriately covers and addresses AI risks, including with the right level of specificity for various use cases.
Response/Comment: Overall, we do not believe that the AI RMF sufficiently covers and addresses AI risks, especially systemic and societal risks, nor does it have sufficient specificity for various use cases. However, we believe it has the potential to do so if expanded, and whether it achieves the right level of specificity will depend in part on the draft Practice Guide, which has not yet been released.

2. Whether the AI RMF is flexible enough to serve as a continuing resource considering the evolving technology and standards landscape.
Response/Comment: Our current sense is that the AI RMF would be flexible enough to serve as a continuing resource, especially with frequent updates to the Practice Guide and Profiles.

3. Whether the AI RMF enables decisions about how an organization can increase understanding of, communication about, and efforts to manage AI risks.
Response/Comment: Overall, we do believe that the AI RMF has the potential to enable decisions about how an organization can increase understanding of, communication about, and efforts to manage AI risks. However, again, much will depend on the draft Practice Guide, which has not yet been released. We would expect the Practice Guide to include, directly or by reference, a wide array of example pitfalls, concerns, and remediations. Additional recommendations on specific methodologies within the main AI RMF document would also be very helpful.

4. Whether the functions, categories, and subcategories are complete, appropriate, and clearly stated.
Response/Comment: The functions, categories, and subcategories need greater detail to be practical for a wide variety of potential users. We provide several more specific comments addressing this question in the following section, “Our comments on specific passages in the NIST AI RMF Initial Draft”, under “Page 16, Table 1, Category ID 2”; “Page 16, Table 1, Category ID 3”; “Page 16, Table 1, Category ID 4”; “Page 17, Table 2, Category ID 2, Second Subcategory”; etc.

5. Whether the AI RMF is in alignment with or leverages other frameworks and standards such as those developed or being developed by IEEE or ISO/IEC JTC 1/SC 42.
Response/Comment: Our current sense is that the AI RMF is broadly in alignment with other frameworks and standards such as those developed or being developed by ISO/IEC JTC 1/SC 42. However, further details about the AI RMF are needed, and many of the other frameworks and standards are themselves still under development, so this question should be revisited.

6. Whether the AI RMF is in alignment with existing practices, and broader risk management practices.
Response/Comment: Overall, we do believe that the AI RMF is in alignment with many current best practices, including broader risk management practices. We provide more specific comments in the following sections.

7. What might be missing from the AI RMF.
Response/Comment 7A: We believe the Initial Draft of the AI RMF is missing a clearer discussion of the potential for systemic or even catastrophic impacts to individuals and society. We agree with the statements on p. 6 of the Initial Draft that examples of potential harms from AI systems include systemic risks such as “large scale harms to the financial system or global supply chain”, and long-term risks as follows: “Some AI risks … may be latent at present but may increase in the long term as AI systems evolve.” However, the Initial Draft does not seem to have a statement clearly corresponding to the following passage from the AI RMF Concept Paper: “…Managing AI risks presents unique challenges. An example is the evaluation of effects from AI systems that are characterized as being long-term, low probability, systemic, and high impact. Tackling scenarios that can represent costly outcomes or catastrophic risks to society should consider: an emphasis on managing the aggregate risks from low probability, high consequence effects of AI systems, and the need to ensure the alignment of ever more powerful advanced AI systems.” (NIST 2021a, p. 1)

In the wake of significant advances in AI systems such as BERT, CLIP, GPT-3, DALL-E 2, and PaLM, it is vitally important for the AI RMF to prepare teams to address the possibility of both transformative benefits and catastrophic risks from these increasingly multi-purpose or general-purpose AI models, which can serve as platforms underpinning many end-use applications. Such advanced AI models often have qualitatively distinct properties compared to narrower models, such as the potential to be applied to many sectors at once, and emergent properties that can provide unexpected capabilities but also unexpected risks of adverse events. These models could present corresponding catastrophic risks to society, e.g., correlated robustness failures across multiple high-stakes application domains (Bommasani et al. 2021, pp. 115-116).

We recommend that NIST more clearly adapt or insert statements from that AI RMF Concept Paper passage into Section 4 of the AI RMF (Framing Risk), perhaps specifically in Section 4.2 (Challenges for AI Risk Management). We believe it would be in the interests of all stakeholders, including AI developers, for the AI RMF to clearly aim to constructively prompt early, proactive consideration of these risk management issues. As Jiahao Chen of Parity AI and Richard Mallah of the Future of Life Institute both noted at the NIST AI RMF Workshop 2 (NIST 2022), identifying and addressing a risk earlier rather than later helps to maximize the benefits and minimize the costs of managing that risk. Moreover, there is precedent for NIST framework guidance prompting risk assessment that considers potentially catastrophic impacts: the NIST Cybersecurity Framework guidance on risk assessment points to NIST SP 800-53 control RA-3, which in turn references NIST SP 800-30; the impact assessment scale in Table H-3 of SP 800-30 includes criteria for rating an expected impact as a “catastrophic adverse effect” on individuals, organizations, or society (NIST 2012, NIST 2018, NIST 2020).

Suggested Change 7A: In Section 4.2 or elsewhere, we recommend adding statements that more clearly correspond to the following passage (or perhaps simply inserting the entire passage) from p. 1 of the AI RMF Concept Paper: “…Managing AI risks presents unique challenges. An example is the evaluation of effects from AI systems that are characterized as being long-term, low probability, systemic, and high impact. Tackling scenarios that can represent costly outcomes or catastrophic risks to society should consider: an emphasis on managing the aggregate risks from low probability, high consequence effects of AI systems, and the need to ensure the alignment of ever more powerful advanced AI systems.”

Response/Comment 7B: In addition, the current framing of AI risks and characteristics of trustworthy AI described in Figure 3 and elsewhere may be missing important details and nuance. It is unclear why, in Figure 3 and elsewhere, NIST selected only the three guiding principles of fairness, accountability, and transparency and not others. For example, the NIST Taxonomy of AI Risk (NIST 2021b, p. 8) notes that the OECD principles include “traceability to human values” and the EU principles include “human agency and oversight” and “environmental and societal well-being.”

More broadly, the list of known risks and characteristics of trustworthy AI shown in Figure 3 is not comprehensive. There are many additional characteristics that may inform the realization of trustworthy AI. Although these cannot all reasonably be added here, it should be acknowledged that more detail and nuance are available elsewhere (see, e.g., discussion in CLTC forthcoming). It would be valuable for NIST to provide guidance on how organizations can incorporate additional guiding principles and/or characteristics as part of their use of the AI RMF. Lastly, the split between technical and socio-technical risks is problematic; we provide further discussion and recommendations on this point in the following comment, under “Page 7, Lines 35-37; Page 8, Figure 3 and elsewhere in Section 5.”

Suggested Change 7B: In Section 5 or elsewhere, we recommend explaining why NIST selected only the three guiding principles of fairness, accountability, and transparency and not others. We also recommend providing guidance on how organizations can incorporate additional guiding principles and/or characteristics as part of their use of the AI RMF.

8. Whether the soon-to-be-published draft companion document citing AI risk management practices is useful as a complementary resource, and what practices or standards should be added.
Response/Comment: The draft Practice Guide has not yet been released. It would be useful if it included, among other things, guidance on how to identify possible harms and risks, as well as reference lists of examples of unintended harms that AI systems can cause or have caused.

9. Others?
Response/Comment: Our current sense is that Implementation Tiers will not have great value for many AI RMF users. Moreover, omitting Implementation Tiers may help avoid confusion between NIST Implementation Tiers and the EU AI Act's risk tiers.

Additional comments on specific passages in the NIST AI RMF Initial Draft can be found in the full submission.

Download the full comment