AI Security Initiative


Advancing the Frontier of AI Risk Management


Artificial intelligence (AI) is advancing rapidly, and the associated risks — from cyber operations to socioeconomic disruption — are growing in scale, scope, and urgency. 

Without proper guardrails, these risks can harm individuals, businesses, and society. Risk management must stay a step ahead of AI development to ensure AI is used safely, ethically, and in ways that align with human values and legal standards. 

Based at the University of California, Berkeley, the AI Security Initiative is uniquely positioned to ensure that AI is developed safely and serves the public interest.


Who We Are

The AI Security Initiative is a leading center for research and development of AI risk management standards. We believe that meaningful multistakeholder engagement and actionable research are neglected in debates about AI risks. We strive to ensure that the benefits of AI are distributed equitably and that the greatest harms are prevented.

Founded in 2019 at UC Berkeley’s Center for Long-Term Cybersecurity, we are a multidisciplinary research group working with world-renowned faculty. We partner with leading AI and tech policy centers across campus and beyond.

Our Expertise

  • AI Risk Management: Leading a multi-stakeholder effort to develop a risk management standards profile for general-purpose AI, updated annually
  • AI Risk Thresholds: Improving the scientific rigor and public accountability of intolerable risk thresholds for frontier AI
  • AI Risk Modeling and Threat Modeling: Developing probabilistic risk assessment methods and analyzing the severity and likelihood of different AI risk pathways
  • AI Testing and Evaluation: Developing guidance for comprehensive AI testing and evaluation methods including red teaming and social impact evaluations
  • Guiding Public Sector AI: Working with state and local governments, including California and Washington, to guide the responsible use of AI
  • Cyber AI Risks: Advancing understanding of how AI amplifies cybersecurity risks and what mitigation strategies are available
  • Responsible Development and Use of Generative AI: Working with organizations to guide the responsible development and use of AI, including generative AI
  • AI Risk Communication: Building on decades of literature on risk communication to advance risk communication practices for AI

What We Do

We help AI developers, policymakers, and researchers stay a step ahead in order to realize AI’s benefits — and prevent its greatest harms. 

We conduct research on how to anticipate, measure, and evaluate the risks of AI, and we develop risk management guidance for AI developers, users, researchers, and policymakers. 

We serve as a neutral convening platform, bringing together multidisciplinary experts from across sectors through workshops and events.

And we help shape AI policy and standards through participation in AI governance bodies at state, national, and international levels.

Our Impact

  • Our policy recommendations (e.g., to OSTP, NTIA, and NIST) have informed actions taken by the U.S. federal government on AI risk management
  • Our research has been highlighted at leading AI conferences including the International Association for Safe & Ethical AI Conference, FAccT, and TrustCon
  • We have supported more than two dozen fellows working in AI policy and security, many of whom have gone on to impactful careers in these fields
  • Our workshops and convenings have brought together hundreds of experts from across stakeholder groups to address pressing questions in AI governance, security, and risk management

Our Approach

AI risk is multifaceted — and so is our approach. 

We understand that the harms from AI may be devastating and that there is a narrow window of opportunity to meaningfully address them. 

AI risk management must contend with the capabilities and limitations of AI technologies, the people who develop and use them, and the structures of power in which they operate.

Our vision is a world in which AI technologies are developed and deployed in ways that are verifiably safe, secure, and accountable to the people whose lives they impact.

Our Leadership

  • We run the UC Berkeley AI Policy Hub, which advances interdisciplinary research and education to anticipate and address AI policy opportunities
  • We serve as members of the U.S. AI Safety Institute Consortium at the National Institute of Standards and Technology and contribute to the development of standards for AI testing, evaluation, and transparency
  • We take part in leading international and multistakeholder AI governance deliberations, such as the EU Code of Practice Plenary for General-Purpose AI and the OECD Expert Group on AI Risk and Accountability


Contact Jessica Newman (jessica dot newman at berkeley dot edu) to explore opportunities to engage and partner with the AI Security Initiative, and subscribe to our mailing list for regular updates on new research and events.