Guidance for the Development of AI Risk and Impact Assessments


A new report from the Center for Long-Term Cybersecurity provides a set of recommendations to help governments and other organizations evaluate the potential risks and harms associated with new artificial intelligence (AI) technologies.

The paper, Guidance for the Development of AI Risk and Impact Assessments, by Louis Au Yeung, a recent Master of Public Policy graduate from the Goldman School of Public Policy at UC Berkeley, focuses on "AI risk and impact assessments": formalized, structured assessments used to characterize the risks arising from the use of AI systems and to identify proportionate risk mitigation measures.

“These assessments may be used by both public and private entities hoping to develop and deploy trustworthy AI systems, and are broadly considered a promising tool for AI governance and accountability,” Au Yeung wrote. “Ensuring that AI systems are safe and trustworthy is critical to increasing people’s confidence and harnessing the potential benefits of these technologies…. Risk and impact assessments provide a structured approach for assessing the risks of specific AI systems, differentiating them based on their riskiness, and adopting mitigation measures that are proportionate to the risks.”

For his research, Au Yeung, a graduate student researcher with CLTC's Artificial Intelligence Security Initiative (AISI), conducted a comparative analysis of AI risk and impact assessments from five jurisdictions: Canada, New Zealand, Germany, the European Union, and San Francisco, California. The report compares how these assessment models approach key questions about the safety of AI systems, including what impacts such systems could have on human rights and the environment, and how the resulting range of risks should be managed.

The paper pays particular attention to efforts underway at the National Institute of Standards and Technology (NIST), which the United States Congress has tasked with developing a voluntary AI risk management framework that organizations can use to promote trustworthy AI development and use. The paper reviews the risk management frameworks NIST has previously developed for cybersecurity and privacy, and identifies novel considerations for AI risk that may not map neatly onto those earlier frameworks.

Based on interviews and desktop research, the paper offers recommendations to help NIST and other interested entities develop AI risk and impact assessments that effectively safeguard the wider interests of society. Examples of these recommendations include:

  • Certain risk mitigation measures are emphasized across all surveyed frameworks and should be considered essential as a starting point. These include human oversight, external review and engagement, documentation, testing and mitigation of bias, alerting those affected by an AI system of its use, and regular monitoring and evaluation.
  • In addition to assessing impacts on safety and rights, it is important to account for impacts on inclusiveness and sustainability in order to protect the wider interests of society and ensure that marginalized communities are not left behind.
  • Individuals and communities affected by the use of AI systems should be included in the process of designing risk and impact assessments to help co-construct the criteria featured in the framework.
  • Risk and impact assessment frameworks should provide for banning the use of specific AI systems that present unacceptable risks, to ensure that fundamental values and safety are not compromised.
  • Periodic reassessments should be required to ensure that continuously learning AI systems still meet the required standards after they undergo notable changes.
  • Risk and impact assessments should be tied to procurement and purchase decisions to incentivize the use of voluntary frameworks.

“The widespread use of AI risk and impact assessments will help to ensure we can gauge the risks of AI systems as they are developed and deployed in society, and that we are informed enough to take appropriate steps to mitigate potential harms,” Au Yeung wrote. “In turn, this will help promote public confidence in AI and enable us to enjoy the potential benefits of AI systems.”

Download the full paper (PDF)

Download a brief version of the paper (PDF)