Artificial intelligence is widely used across the ten-campus University of California (UC), for applications ranging from medical imaging and health-risk modeling to employee recruiting. Yet AI also poses a wide range of potential risks, including racial bias and other forms of discrimination.
To help the UC System prepare for and mitigate the potential harms of artificial intelligence, UC President Michael V. Drake and former UC President Janet Napolitano launched the UC Presidential Working Group on Artificial Intelligence, charging it with shaping a set of responsible principles to promote the safe and ethical development, procurement, use, and monitoring of AI across the university.
The working group has published a final report that explores current and future applications of AI across the university and provides recommendations for operationalizing the UC Responsible AI Principles in four high-risk application areas: health, human resources, policing, and student experience.
Jessica Newman, program lead for the Center for Long-Term Cybersecurity’s Artificial Intelligence Security Initiative (AISI), served as co-chair of the Presidential Working Group’s Health Subcommittee. “It is easy to focus on the cutting-edge AI research being carried out across the UC campuses, but AI technologies are also already being implemented throughout UC operations in high-risk domains including healthcare, policing, HR, and student experience,” Newman says.
“Last year, UC Health provided over eight million outpatient visits,” Newman says. “Current uses of AI, such as hospitalization risk modeling, have the potential to provide more proactive care to those in need, but could also incorporate bias and systemically and unfairly disadvantage certain groups. UC needs a strategy to ensure oversight and accountability for such AI uses, and the recommendations outlined in this report — developed by dozens of multidisciplinary experts over twelve months — provide a meaningful starting point from which to do so.”
The report concludes with four overarching recommendations to help guide UC’s strategy for determining whether and how to responsibly implement AI in its operations:
- Institutionalize the UC Responsible AI Principles in procurement, development, implementation, and monitoring practices;
- Establish campus-level councils and support coordination across UC to further the principles and guidance developed by the Working Group;
- Develop an AI risk and impact assessment strategy; and
- Document AI-enabled technologies in a public database.
Newman’s leadership on the UC Presidential Working Group is part of a broader effort by the AISI to promote the adoption of principles and guidelines for responsible use of AI. Newman and fellow researchers are working to shape a new AI framework under development by the National Institute of Standards and Technology (NIST), and together with the CITRIS Policy Lab, the AISI is supporting the State of California in establishing guidelines for the responsible use of AI by state government agencies.
UC is one of the first universities to adopt a set of principles and recommendations to guide the responsible use of AI in its operations. “The principles have the potential to become a model for other universities,” Newman says. “As institutions turn to AI to enhance the efficiency of their services, they must ensure they have robust guidelines and oversight mechanisms in place.”