Housed in the UC Berkeley Center for Long-Term Cybersecurity (CLTC), the AI Security Initiative is a growing hub for interdisciplinary research on the global security implications of AI.
The rapid expansion of artificial intelligence has led to a dramatic shift in the global security landscape. For all their benefits, AI systems introduce new vulnerabilities and can yield dangerous outcomes, from the automation of cyberattacks to disinformation campaigns and new forms of warfare.
AI is expected to contribute transformative growth to the global economy, but those gains are poised to widen inequities, stoke social tensions, and fuel dangerous competition among nations. The AI Security Initiative works across technical, institutional, and policy domains to support trustworthy development of AI systems today and into the future. We facilitate research and dialogue to help AI practitioners and decision-makers prioritize the actions that will have an outsized impact on the future trajectory of AI security around the world.
The Initiative’s long-term goal is to help communities around the world thrive with safe and responsible automation and machine intelligence.
The AI Security Initiative conducts independent research and engages with technology leaders and policymakers at the state, national, and international levels, leveraging UC Berkeley’s premier reputation and our San Francisco Bay Area location near Silicon Valley. Our activities include conducting and funding technical and policy research, and translating that research into practice. We convene international stakeholders, hold policy briefings, publish white papers and op-eds, and engage with leading partner organizations in AI safety, governance, and ethics.
Our research agenda focuses on the key decision points that will have the greatest impact on the future trajectory of AI security, including decisions about how AI systems are designed, procured, and deployed. These decisions will affect everything from AI standards and norms to global power dynamics and the changing nature of warfare. Our research addresses three key challenges: vulnerabilities, misuse, and power.