Announcement, News / June 2025

AI Security Initiative Seeking Fall 2025 Graduate Student Researcher

Overview

The UC Berkeley Center for Long-Term Cybersecurity (CLTC) invites applications for a Graduate Student Researcher (GSR) position within CLTC’s AI Security Initiative (AISI) for the Fall 2025 semester. This GSR appointment is for up to 50% time and can include fee remission.

The accepted applicant will have the opportunity to engage with CLTC’s researchers and network, and to contribute to a hub for interdisciplinary research on the global governance and security implications of artificial intelligence. Applicants from all departments, including PhD, master’s, and law students, are encouraged to apply.

Responsibilities / Tasks

The GSR will primarily work on the AISI’s intolerable risk thresholds project, with opportunities to contribute to the v1.2 update of the UC Berkeley AI Risk Management Profile. The GSR will collaborate closely with other researchers at the AI Security Initiative and must be available for virtual meetings several times a week.

Main responsibilities will include: 

  • Assisting with literature reviews on methods to ensure the safety and/or security of frontier models, including AI capability evaluations, AI risk management frameworks, and AI risk thresholds.
  • Drafting content for research outputs and/or AI risk management guidance.
  • Running quality assurance (QA) checks (e.g., spelling, punctuation, typos, citations) on written outputs.
  • Assisting with document formatting (including citations and references). 

Additional responsibilities depending on skills and research interests: 

  • Assisting with the development of a probabilistic graphical model (Bayesian network) using RStudio.
  • Assisting with gap analyses to ensure that drafted guidance addresses key technical and governance issues in AI safety, security, or other areas.
  • Applying research methods from engineering (e.g., statistical analysis), social science (e.g., survey design), or other fields, as appropriate, to problems related to AI safety, security, impact assessment, or other AI risk management topics.

Minimum Qualifications 

  • Actively enrolled in a graduate-level degree program at UC Berkeley.
  • Basic understanding of AI concepts, including machine learning, AI models, and their applications.
  • Familiarity with AI risk management frameworks and methodologies, especially those related to ethical AI, security, and governance.
  • Good written and verbal communication skills.
  • Ability to participate in academic research efforts, including literature reviews, data gathering, and synthesis of complex information. 

Special Knowledge, Skills & Abilities / Preferred Qualifications

  • Ability to translate AI and cybersecurity research into quantifiable metrics, analyze a large body of publications, and think critically about emerging technology impacts. 
  • Experience with R, SAS, Python, or a similar language.
  • Graduate-level statistics knowledge, particularly with probabilistic graphical models such as Bayesian networks.
  • Knowledge of leading risk management and governance frameworks such as the NIST Cybersecurity Framework (CSF), NIST AI Risk Management Framework (AI RMF), NIST SP 800-37, and ISO/IEC 27001 and 42001.
  • Familiarity with emerging AI safety concerns such as deception, AI agents, and CBRN/cyber risks.
  • Previous experience in an academic research environment.

Supervisor

Jessica Newman, Director of the AI Security Initiative.

Compensation

Information on GSR salary scales can be found here. The step level for this appointment will be commensurate with experience. 

Application Process

To apply, please submit your CV and a brief cover note describing your interest in the position and any relevant experience via this Google form by Thursday, July 10th at 11:59pm (PDT).

If you have further questions, please email jessica.newman@berkeley.edu.