White Paper / May 2020

Decision Points in AI Governance


The Center for Long-Term Cybersecurity (CLTC) has issued a new report that takes an in-depth look at recent efforts to translate artificial intelligence (AI) principles into practice. The report, Decision Points in AI Governance, authored by CLTC Research Fellow and AI Security Initiative (AISI) Program Lead Jessica Cussins Newman, provides an overview of 35 efforts already underway to implement AI principles, ranging from tools and frameworks to standards and initiatives that can be applied at different stages of the AI development pipeline.

The paper also highlights three recent efforts as case studies: Microsoft’s AETHER Committee, which the company established to help evaluate normative questions related to AI; OpenAI’s “staged release” of a powerful language model, which challenged traditional software publishing norms to promote research and dialogue about possible harms; and the Organisation for Economic Co-operation and Development (OECD) AI Policy Observatory, which launched earlier this year as part of a groundbreaking international effort to establish shared guidelines around AI.

“The question of how to operationalize AI principles marks a critical juncture for AI stakeholders across sectors,” Cussins Newman wrote in an introduction to the report. “The case studies detailed in this report provide analysis of recent, consequential initiatives intended to translate AI principles into practice. Each case provides a meaningful example with lessons for other stakeholders hoping to develop and deploy trustworthy AI technologies.”

In the past few years, dozens of groups from industry, government, and civil society have published “AI principles,” frameworks designed to establish goals for safety, accountability, and other values in support of the responsible advancement of AI. Decision Points in AI Governance provides examples of the kinds of efforts now underway in AI research and applications, and serves as a guide for other AI stakeholders, including companies, communities, and national governments, facing decisions about how to safely and responsibly develop and deploy AI around the world.

“It has become difficult for AI stakeholders to ignore the many AI principles and strategies in existence,” Cussins Newman wrote. “Even companies and organizations that have not defined their own principles may be expected or required to adhere to those adopted by governments. This growing ‘universality’ will lead to increased pressure to establish methods to ensure AI principles and strategies are realized . . . early efforts — those that fill governance gaps to establish new standards and best practices — are likely to be especially influential.” Among the other notable findings detailed in the report:

  • Large multinational companies have an outsized impact on trends in AI development and deployment, but have not universally adopted new practices or oversight committees to help ensure their technologies will be beneficial.
  • Executive-level support is critical in shaping an organization’s commitment to responsible AI development. Engagement with employees and experts — and integration with the company’s legal team — are also essential.
  • As publication norms shift, researchers and organizations will face decisions about how to responsibly publish AI research that could be misused or cause unintentional harm.
  • Companies can make use of multiple synergistic accountability measures, including documentation efforts, discussion of potentially harmful uses and impacts in research papers, and communication channels before and after the release of new AI models.
  • International coordination and cooperation on AI begin with a common understanding of what is at stake and what outcomes are desired for the future. That shared language now exists in the OECD AI Principles, which are being leveraged to support partnerships, multilateral agreements, and the global deployment of AI systems.
  • Despite challenges in achieving international cooperation, governments remain motivated to support global governance frameworks for AI.

“Decisions made today about how to operationalize AI principles at scale will have major implications for decades to come,” Cussins Newman wrote. “AI stakeholders have an opportunity to learn from existing efforts and to take concrete steps to ensure we build a better future.”

The report’s cover image depicts the creation of “Artefact 1” (2019) by Sougwen Chung, a New York-based artist and researcher. The work explores artistic co-creation and is the outcome of an improvisational drawing collaboration with a custom robotic unit linked to a recurrent neural network trained on Ms. Chung’s drawings. The contrasting colors of the lines distinguish the marks made by the machine from those made by Ms. Chung’s own hand.

Download the Full Report