AI Policy Hub

Advancing interdisciplinary research to anticipate and address AI policy opportunities


The AI Policy Hub is an interdisciplinary initiative training forward-thinking researchers to develop effective governance and policy frameworks to guide artificial intelligence, today and into the future.

Research conducted through the AI Policy Hub helps policymakers and other AI decision-makers act with foresight in rapidly changing social and technological environments.

Our mission is to cultivate an interdisciplinary research community to anticipate and address policy opportunities for safe and beneficial AI. 

Our vision is a future in which AI technologies do not exacerbate division, harm, violence, and inequity, but instead foster human connection and societal well-being.

We are housed at the AI Security Initiative, part of the University of California, Berkeley’s Center for Long-Term Cybersecurity, and the University of California’s CITRIS Policy Lab, part of the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS).

We also collaborate with other UC Berkeley departments and centers that are contributing work on AI governance and policy, including Berkeley’s Division of Computing, Data Science, and Society (CDSS) and its affiliated School of Information, the Center for Human-Compatible Artificial Intelligence (CHAI), the Berkeley Center for Law & Technology (BCLT), the College of Engineering, and the Goldman School of Public Policy.

Fellowship Overview

A key goal of the AI Policy Hub is to strengthen interdisciplinary research approaches to AI policy while expanding inclusion of diverse perspectives. We encourage UC Berkeley students actively enrolled in graduate degree programs (Master’s and PhD students) from all departments and disciplines to apply. 

Participants of the AI Policy Hub will have the opportunity to conduct innovative, interdisciplinary research and make meaningful contributions to the AI policy landscape, helping to reduce the harmful effects and amplify the benefits of artificial intelligence.

Program participants will receive faculty and staff mentorship, access to world-renowned experts and hands-on training sessions, connections with policymakers and other decision-makers, and opportunities to share their work at a public symposium. The AI Policy Hub will provide participants with practical training for AI policy career paths in federal and state government, academia, think tanks, and industry. Selected participants will receive up to 50% Graduate Student Researcher (GSR) positions for the full academic year, covering the Fall 2023 and Spring 2024 semesters, with tuition and fee remission for both semesters.

Research Priorities

Our research is focused on forward-looking, consequential, and societal-scale implications of AI. For this year’s cohort, we are especially interested in projects that aim to mitigate harmful societal implications of generative AI and increasingly general-purpose AI systems, such as large language models.

Current topics of interest include, but are not limited to:

  • Standards, frameworks, benchmarks, or policies for the responsible development, deployment, or use of generative AI (e.g., NIST AI Risk Management Framework, EU conformity assessments)
  • Technical/governance processes for the validity, reliability, robustness, fairness, explainability, and transparency of generative AI systems
  • Monitoring of AI accidents, incidents, and impacts
  • Innovative legislative/regulatory models for AI or interpretations of existing laws and oversight mechanisms in light of AI technologies
  • Responsible development and design of generative AI (e.g., data scraping, data protection, labor rights, safety and accountability mechanisms)
  • Responsible AI publication practices and policies (e.g., licensing, APIs, open-source or limited release, intellectual property rights)
  • Implications of generative AI for knowledge production, culture, democracy, and the economy
  • Monopolization and control vs. increasing access to AI development, infrastructure, and capabilities
  • Abuses of AI power (e.g., by governments, industry, or users, resulting in censorship, surveillance, human rights abuses, addictive or harmful design choices, dark patterns, toxic or harmful content, or disinformation)
  • Weaponization of AI (e.g., lethal autonomous weapon systems, AI cyber weapons)
  • Identification and mitigation of AI-enabled harms to civil and political rights (e.g., in education, voting, policing, housing, employment, and healthcare)
  • Geopolitical dynamics and opportunities for international coordination

Student Expectations

During the one-year program, students are expected to:

  • Conduct innovative research that addresses one or more of the topics of interest
  • Publish research through a white paper and/or journal article
  • Translate research into at least one policy deliverable (e.g., op-ed, policy memo)
  • Present their work at the annual symposium
  • Participate in weekly team meetings and bi-weekly individual meetings
  • Participate in the workshops and speaker series events
  • Support fellow members of their cohort by providing feedback

Application Process

To apply, students must submit the form found here by Friday, April 28 at 11:59 PM (PDT). In addition to a short list of questions about you and your project, the form will require you to upload your CV and a document (2 pages max) describing your proposed project and its expected policy impacts.

Finalists will be invited to interview with AI Policy Hub directors in May. Decisions will be made in June and the selected students will be notified via email.

Meet the Fall ’22 – Spring ’23 Cohort!

Alexander Asemota

PhD Student, Statistics, Division of Computing, Data Science, and Society

Alex Asemota is a third-year PhD student in the Statistics Department advised by Giles Hooker. His research focuses on explainability in machine learning; he is currently developing counterfactual methods that are useful for practitioners in industry. A graduate of Howard University, Alex was awarded a Chancellor’s Fellowship during the first two years of his PhD training at UC Berkeley.

Research Focus: Development of realistic metrics for counterfactual explanations in AI.

Micah Carroll

PhD Student, Electrical Engineering and Computer Sciences
GitHub | @MicahCarroll

Micah Carroll is an Artificial Intelligence PhD student at UC Berkeley advised by Anca Dragan and Stuart Russell. Originally from Italy, Micah graduated with a Bachelor’s in Statistics from Berkeley in 2019. His research interests lie in human-AI systems: in particular the effects of social media on users and society, and making AIs better at complementing and collaborating with humans.

Research Focus: Identification of manipulation incentives in recommender systems that maximize long-term engagement.

Angela Jin

PhD Student, Electrical Engineering and Computer Sciences
Profile | @angelacjin

Angela Jin is a second-year PhD student at UC Berkeley advised by Rediet Abebe. Previously, she was at Cornell University, where she received her B.S. in Computer Science in 2021. Her research interests lie at the intersection of human-computer interaction and machine learning, with a focus on bridging research and practice to build computational tools for scrutinizing algorithmic systems. Through her work, Angela strives to improve equity and access to opportunity for marginalized communities.

Research Focus: Design of sociotechnical systems to help defense attorneys adversarially test the reliability of evidentiary statistical software in the U.S. criminal legal system.

Zoe Kahn

PhD Student, School of Information
LinkedIn | @zoebkahn

Zoe Kahn’s research explores how AI/ML systems may result in unanticipated dynamics, including harms to people and society. She uses qualitative methods to understand the perspectives and experiences of impacted communities; she then leverages storytelling to influence the design of technical systems and the policies that surround their use. Zoe has conducted fieldwork in rural communities in the United States, worked on issues of homelessness in the Bay Area, and is currently working on a project that uses data-intensive methods to allocate humanitarian aid to individuals experiencing extreme poverty in Togo.

Research Focus: Development of empirically grounded stories from Togo and the Bay Area to help position policymakers and technologists to better account for the situated experiences, practices, and perspectives of impacted communities.

Zhouyan Liu

MPP Student, Goldman School of Public Policy

Zhouyan Liu graduated from Peking University and spent four years as an investigative journalist for Sanlian Lifeweek, a Beijing-based weekly news magazine, covering technology and politics. He has also worked part-time with or interned at ByteDance (TikTok)’s global public policy team, the California Office of Digital Innovation, and Stanford University’s Cyber Policy Center. At UC Berkeley, Zhouyan is an MPP candidate at the Goldman School of Public Policy. His research interests include empirical studies of China’s technology policy, digital surveillance, and privacy.

Research Focus: Analysis of China’s digital surveillance system and its consequences for privacy rights, state capacity, and state-society relations.

Cedric Whitney

PhD Student, School of Information

Cedric Deslandes Whitney is a third-year PhD student at Berkeley’s School of Information, advised by Professors Jenna Burrell and Deirdre Mulligan. He is an NSF Graduate Research Fellow, and his background is in leading the deployment of federated machine learning infrastructures in healthcare. His research uses mixed methods to tackle questions of AI governance, including previous work at the FTC on algorithmic disgorgement and at IBM on the right to be forgotten in AI systems.

Research Focus: Exploration of how algorithmic disgorgement (machine unlearning) can be effectively wielded in both compliance efforts and prospective legislation.


If you are a UC Berkeley student with questions about the application, or a faculty member or researcher interested in collaboration or in providing student mentorship, please contact Jessica Newman. For media inquiries, please contact Charles Kapelke. Interested in supporting our work philanthropically? Shanti Corrigan can facilitate introductions to our team of experts and explain the impact gifts of all sizes can make in advancing our mission.