Announcement / March 2021

Call for Graduate Student Researchers: Global Governance and Security Implications of Artificial Intelligence

The UC Berkeley Center for Long-Term Cybersecurity (CLTC) invites applications for Graduate Student Researcher positions within the CLTC AI Security Initiative for limited-term appointments during Summer 2021. Accepted applicants will have the opportunity to engage with CLTC staff and its network, and to contribute to a growing hub for interdisciplinary research on the global governance and security implications of artificial intelligence. Opportunities will vary based on the skills and interests of each applicant. The Initiative is interdisciplinary, and applicants from all departments, including PhD, master's, and law students, are encouraged to apply.

Graduate student researchers will have the opportunity to investigate their own research questions. The research projects will be largely self-directed, with guidance from others in the Initiative. The kinds of questions we are interested in include, but are not limited to, the following:

  • What are the implications of risk-based AI regulation? In what ways will emerging regulatory frameworks meaningfully reduce potential harms from AI development and deployment?
  • What is the impact of independent auditing on AI development and deployment?
  • What mechanisms are under-explored for protecting against attacks on and with AI systems, including information warfare, deepfakes and other synthetic media, and adversarial attacks?
  • As increasingly advanced AI systems begin to combine language models with computer vision, what novel ethical, governance, and security issues will need to be considered?

To apply, please fill out this Google form by Friday, April 2nd. In your cover letter, please specify the research question you want to focus on, the desired output (e.g., a white paper or research article), and an overview of how you plan to achieve your research goals over the course of the summer. For an example of work completed by a former AI Security Initiative GSR, please see the white paper “The Flight to Safety-Critical AI: Lessons in AI Safety from the Aviation Industry.” If you have already published research relevant to AI safety, security, ethics, or policy, please also include a link to, or an attached PDF of, that work. If you have further questions, please email jessica.newman@berkeley.edu.

Apply Here