August 23, 2022

AI Policy Hub Welcomes Inaugural Cohort of Graduate Student Researchers

Six graduate students from across the UC Berkeley campus have been selected to join the AI Policy Hub, a newly established interdisciplinary center focused on translating scientific research into governance and policy frameworks to shape the future of artificial intelligence (AI).

The UC Berkeley AI Policy Hub is run by the AI Security Initiative, part of the Center for Long-Term Cybersecurity at the UC Berkeley School of Information, and the University of California’s CITRIS Policy Lab, part of the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS).

“We are thrilled to welcome the first cohort of researchers to the AI Policy Hub,” says Jessica Newman, Director of the AI Security Initiative. “These students are tackling critical emerging challenges related to AI-based systems, from the impact of ‘recommender algorithms’ in social media to digital surveillance in China. We expect that their research will help provide guidance to policymakers and decision-makers, locally and globally.”

Each of the students in the cohort will conduct independent research on a specialized topic related to AI, and will then use their findings to develop policy recommendations for realizing the potential benefits of AI, while managing harms and reducing the risk of devastating outcomes, including accidents, abuses, and systemic threats. The researchers will share findings through symposia, policy briefings, papers, and other resources, to inform policymakers and other AI decision-makers so they can act with foresight.

“The graduate students represent a diversity of backgrounds and disciplinary expertise,” says Brandie Nonnecke, Director of the CITRIS Policy Lab. “They will play a crucial role in the AI Policy Hub’s goal of generating interdisciplinary, research-based approaches to advancing responsible AI policy.”

Following are brief profiles of the six students:

Alexander Asemota

Alexander Asemota, a PhD student in Statistics in the UC Berkeley Division of Computing, Data Science, and Society, conducts research on explainability in machine learning. His research will focus on improving “counterfactual explanations,” a promising method for understanding decisions made by AI-based systems.

Micah Carroll

Micah Carroll, a PhD student studying artificial intelligence in the UC Berkeley Department of Electrical Engineering and Computer Sciences, will conduct research into the effects of the “recommender algorithms” used by social media platforms on users and society.

Angela Jin

Angela Jin, a PhD student in the UC Berkeley Department of Electrical Engineering and Computer Sciences, will use experiments and qualitative user studies to create tools that defense lawyers can use to test the reliability of evidentiary statistical software, whose outputs are increasingly used as evidence to prosecute the criminally accused.

Zoe Kahn

Zoe Kahn, a PhD student in the UC Berkeley School of Information, explores how AI/ML systems may result in unanticipated dynamics, including harms, to people and society. Kahn will use qualitative methods to understand the impacts of algorithms in two contexts: in Togo, to determine the allocation of cash aid to people living in extreme poverty; and in the San Francisco Bay Area, to determine the allocation of housing and services to people experiencing homelessness.

Zhouyan Liu

Zhouyan Liu, a Master of Public Policy student at the Goldman School of Public Policy, will research the origins, mechanics, and effects of China’s digital surveillance system, and its consequences for privacy rights, state capacity, and state-society relations. Liu’s work will focus specifically on the role AI plays in this system, as well as on appropriate governance mechanisms.

Cedric Whitney

Cedric Whitney, a PhD student in the School of Information, will research “algorithmic disgorgement,” a subset of “machine unlearning” by which the impact of an individual’s data is removed from a specific machine-learned algorithm. The project aims to help clarify how machine unlearning can be effectively wielded in both compliance efforts and prospective legislation to protect consumers and increase incentives for responsible AI development.

The UC Berkeley AI Policy Hub launched in March 2022 with seed funding from the Future of Life Institute (FLI), a nonprofit organization that seeks to steer the development and use of transformative technology towards benefiting life and away from large-scale risks.

For more information, please contact AI Policy Hub Co-Directors, Jessica Newman (jessica.newman@berkeley.edu) and Brandie Nonnecke (nonnecke@berkeley.edu).