Jessica Newman
Director
Artificial intelligence (AI) is advancing rapidly, and the associated risks — from cyber operations to socioeconomic disruption — are growing in scale, scope, and urgency.
Without proper guardrails, these risks can harm individuals, businesses, and society. Risk management must stay a step ahead of AI development to ensure AI is used safely, ethically, and in ways that align with human values and legal standards.
Based at the University of California, Berkeley, the AI Security Initiative is uniquely positioned to ensure that AI is developed safely and serves the public interest.
The AI Security Initiative is a leading center for research and development of AI risk management standards. We believe that meaningful multistakeholder engagement and actionable research are too often missing from debates about AI risk. We strive to ensure that the benefits of AI are distributed equitably and that the greatest harms are prevented.
Founded in 2019 at UC Berkeley’s Center for Long-Term Cybersecurity, we are a multidisciplinary research group working with world-renowned faculty. We partner with leading AI and tech policy centers across campus and beyond.
We help AI developers, policymakers, and researchers realize AI's benefits while preventing its greatest harms.
We conduct research on how to anticipate, measure, and evaluate AI risks, and we develop risk management guidance for AI developers, users, researchers, and policymakers.
We serve as a neutral convening platform, bringing together multidisciplinary experts from across sectors through workshops and events.
And we help shape AI policy and standards through participation in AI governance bodies at state, national, and international levels.
AI risk is multifaceted — and so is our approach.
We understand that the harms from AI may be devastating and that there is a narrow window of opportunity to meaningfully address them.
AI risk management must contend with the capabilities and limitations of AI technologies, the people who develop and use them, and the structures of power in which they operate.
Our vision is a world in which AI technologies are developed and deployed in ways that are verifiably safe, secure, and accountable to the people whose lives they impact.
Contact Jessica Newman (jessica dot newman at berkeley dot edu) to explore opportunities to engage and partner with the AI Security Initiative, and subscribe to our mailing list for regular updates on new research and events.