The UC Berkeley AI Policy Hub is now accepting applications for its inaugural Fall 2022 – Spring 2023 cohort
APPLY HERE
Applications due by: Tuesday, April 26 at 11:59 PM (PDT)
What are the benefits of the program to participants?
Participants in the AI Policy Hub will have the opportunity to conduct innovative, interdisciplinary research and make meaningful contributions to the AI policy landscape, helping to reduce the harmful effects and amplify the benefits of artificial intelligence.
Program participants will receive faculty and staff mentorship, access to world-renowned experts and training sessions, connections with policymakers and other decision-makers, and opportunities to share their work at a public symposium. The AI Policy Hub will provide participants with practical training for AI policy career paths in federal and state government, academia, think tanks, and industry. Selected participants will receive Graduate Student Researcher (GSR) appointments of up to 50% for the full academic year (Fall ’22 and Spring ’23 semesters), with tuition and fee remission for both semesters.
Who should apply?
A key goal of the AI Policy Hub is to strengthen interdisciplinary research approaches to AI policy while expanding inclusion of diverse perspectives, which we believe is necessary to support safe and beneficial AI into the future. We encourage UC Berkeley students actively enrolled in graduate degree programs (Master’s and PhD students) from all departments and disciplines to apply.
What kinds of projects will be supported?
Our research is focused on forward-looking, consequential, and societal-scale implications of AI.
Current topics of interest include, but are not limited to:
- Monopolization and control of AI development, infrastructure, and capabilities
- Government abuses and misuses of AI power (e.g., censorship, surveillance, and human rights abuses)
- AI-enabled persuasion and manipulation (e.g., recommender systems, dark patterns, computational propaganda)
- Weaponization of AI (e.g., lethal autonomous weapon systems and vulnerabilities of critical infrastructure)
- Identification and mitigation of AI-enabled harms to civil and political rights (e.g., in education, voting, policing, housing, employment, and healthcare)
- Geopolitical dynamics and international coordination
- AI and disaster preparedness
- Technical/governance processes, including standards for the quality, reliability, robustness, and explainability of AI systems
- Monitoring of AI accidents, incidents, and issues
- Models of AI documentation and transparency
- Innovative legislative/regulatory models for AI
What are the expectations of participants?
During the one-year program, students are expected to:
- Conduct innovative research that addresses one or more of the topics of interest
- Publish research through a white paper and/or journal article
- Translate research into at least one policy deliverable (e.g., op-ed, policy brief)
- Present their work at the annual symposium
- Participate in weekly team meetings and speaker series events
- Support fellow members of their cohort by providing feedback
What is the application process?
To apply, students must submit the form found here by Tuesday, April 26 at 11:59 PM (PDT). To preview all of the questions, you can view a PDF of the form here. In addition to a short list of questions about you and your project, the form will require you to upload your CV and a document (2 pages max) describing your proposed project and its expected policy impacts.
Your narrative should include the following:
- A description of the research need(s) and/or problem(s) addressed, including:
  - How your project addresses forward-looking, consequential, and societal-scale implications of AI
  - How your project will anticipate and address policy opportunities for safe and beneficial AI
- Expected policy impacts of your project
- Research question(s) and methodology
- List of deliverables
- Timeline for conducting the research, including any relevant conferences or other fora where the work could be presented (with deadlines, if available)
Finalists will be invited to interview with the AI Policy Hub directors in May. Decisions will be made by mid-June, and the four selected students will be notified via email.
What is the AI Policy Hub?
The AI Policy Hub is an interdisciplinary initiative training forward-thinking researchers to develop effective governance and policy frameworks to guide artificial intelligence, today and into the future.
Research conducted through the AI Policy Hub helps policymakers and other AI decision-makers act with foresight in rapidly changing social and technological environments.
Our mission is to cultivate an interdisciplinary research community to anticipate and address policy opportunities for safe and beneficial AI.
Our vision is a future in which AI technologies do not exacerbate division, harm, violence, and inequity, but instead foster human connection and societal well-being.
We are housed at the AI Security Initiative, part of the University of California, Berkeley’s Center for Long-Term Cybersecurity, and the University of California’s CITRIS Policy Lab, part of the Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS).
We also collaborate with other UC Berkeley departments and centers that are contributing work on AI governance and policy, including Berkeley’s Division of Computing, Data Science, and Society (CDSS) and its affiliated School of Information, the Center for Human-Compatible Artificial Intelligence (CHAI), the Berkeley Center for Law & Technology (BCLT), the College of Engineering, and the Goldman School of Public Policy.
For more information, please see our website.
Questions?
If you have any questions about the application process, please contact Jessica Newman at jessica.newman@berkeley.edu.
APPLY HERE
Applications due by: Tuesday, April 26 at 11:59 PM (PDT)