News / September 2021

Response to NIST AI RMF Request for Information

On September 15, 2021, a group of UC Berkeley researchers with expertise in AI research and development, safety, security, policy, and ethics submitted this formal response to the National Institute of Standards and Technology (NIST), responding to the NIST AI Risk Management Framework (AI RMF) Request for Information released in July 2021.

The researchers focus on three broad categories of risks: to democracy and security, to human rights and well-being, and of global catastrophes. Although many real-world examples of risks may fit into more than one of these categories, each category involves important analytical distinctions and is independently important for ensuring that the future development of AI systems remains safe and consistent with human priorities. The gap they aim to fill with this submission to NIST is the identification of policy strategies, institutional mechanisms, and technical interventions that speak to the intersection of these risks, with emphasis on themes that cut across the particular dangers and warnings articulated by AI theorists, computer scientists, policymakers, and stakeholder advocates.

The key general topics and recommendations include:

  • Continue focusing on societal-scale issues and delineate their meaning, to include: risks to democracy and security; risks to human rights and well-being; and global catastrophic risks.
    • We appreciate that NIST has dedicated substantial attention to societal-scale issues in the AI RMF RFI, in addition to individual and group risks.
    • We recommend that the meaning of societal-scale issues be expanded to include: risks to democracy and security, such as polarization, extremism, mis- and disinformation, and social manipulation; risks to human rights and well-being, including equity, environmental, and public health risks; and global catastrophic risks, including risks to large numbers of people caused by AI accidents, misuse, or unintended impacts in both the near and long term.
  • Risk assessment approaches focused on intended use cases have important limitations.
    • Consideration of intended AI use cases is valuable and necessary, but not sufficient, for identifying and assessing important AI risks.
    • We appreciate that NIST goes beyond focusing on intended use cases in the RMF RFI.
    • We recommend that the RMF include clear, usable guidance on identifying and assessing AI risks, yielding risk management strategies that remain robust under high uncertainty about potential future uses and misuses beyond those the AI designers originally intended.
  • The nascent but growing field of AI safety is providing insights about AI risks and risk management.
    • While much of the work in the field of AI safety is at an early stage, it has already yielded some general principles and tools that we expect could be useful to NIST stakeholders.
    • We recommend that the NIST Framework draw on the nascent but growing field of AI safety to inform its deliberations.
  • NIST should continue to maintain awareness of progress in AI safety and other key fields, and update corresponding components of the RMF as needed.
    • The AI field has changed significantly over the last five years, and is likely to continue to change, perhaps even more dramatically.
    • We recommend that NIST maintain close relationships with researchers in key fields (including AI safety, security, and capabilities) to follow shifts across these fields and their potential impact on the RMF, and that NIST update corresponding components of the Framework as needed.
  • Coordinate standards for risk identification and mitigation, to the extent possible.
    • We recommend that NIST be explicit about how and where the RMF will incorporate and coordinate with existing and future AI standards development and risk assessment.

In the full submission, the researchers expand on the above comments related to key cross-cutting general RMF RFI topics, with a focus on the aforementioned categories of risk (to democracy and security, to human rights and well-being, and global catastrophic risks). They describe each type of risk in detail, outline various subsets and examples, and highlight existing technical and policy work that speaks to them. The researchers then provide separate comments on specific RFI topics. Their recommendations in response to the specific RFI topics include the following:

  • We recommend that the RMF provide guidance on risk identification, assessment, and prioritization processes that includes risks that could have high consequences for society but may seem to AI designers to fall outside the typical scope of consideration for their organization, such as novel or low-probability events, systemic risks, or risks expected to lie beyond their typical time horizon. (Recommendation for RFI Topic 1)
  • We recommend that NIST consult with a diverse set of stakeholders, including risk-sensitive groups, for input, such as on definitions of key terms, to better understand how those terms have been used differently by various stakeholders. (Recommendation for RFI Topic 2)
  • We recommend that NIST consider “assessment of generality” (i.e., assessment of the breadth of an AI system’s applicability and adaptability) as another important characteristic affecting the trustworthiness of an AI system, or perhaps as a factor affecting one or more of the AI trustworthiness characteristics NIST has already outlined. (Recommendation for RFI Topic 2)
  • We recommend that NIST consider including principles of sustainability and inclusivity. We also recommend that NIST clarify two items regarding its use of the terms “characteristics” and “principles” in the RMF RFI: (1) the difference between principles and characteristics, and (2) where the RFI states that “These characteristics and principles are generally considered as contributing to the trustworthiness of AI technologies and systems, products, and services,” whether NIST means “considered by the public,” “considered by experts,” or both. (Recommendation for RFI Topic 3)
  • We recommend that NIST consider including guidance in the RMF that risk identification processes be performed by a diverse, multidisciplinary team representing multiple departments of the organization, along with a correspondingly diverse set of stakeholders from outside the organization. (Recommendation for RFI Topic 5)
  • We recommend that the RMF include standardized templates for reporting information on AI risk factors and incidents, which AI developers could adopt voluntarily. (Recommendation for RFI Topic 5)
  • We recommend that NIST consider adding usability as an attribute of the AI RMF. (Recommendation for RFI Topic 9)
  • We recommend that NIST consider clarifying its planned procedures for making RMF updates (how often, under what conditions, decision criteria), and how it aims to balance flexibility with standard-setting authority. (Recommendation for RFI Topic 10)
  • We strongly recommend that the Framework include a comprehensive set of governance mechanisms to help organizations mitigate identified risks. These should include guidance for determining who should be responsible for implementing the Framework within each organization, ongoing monitoring and evaluation mechanisms that protect against evolving risks from continually learning AI systems, support for incident reporting, risk communication, complaint and redress mechanisms, independent auditing, and protection for whistleblowers, among other mechanisms. We also recommend that the Framework encourage organizations to consider entirely avoiding AI systems that pose unacceptable risks to rights, values, or safety. (Recommendation for RFI Topic 12)