AI Security Initiative

Analyzing Global Security and Governance Implications of Artificial Intelligence

Who We Are

Housed in the UC Berkeley Center for Long-Term Cybersecurity (CLTC), the AI Security Initiative is a growing hub for interdisciplinary research on the global security implications of AI.

The rapid expansion of artificial intelligence has led to a dramatic shift in the global security landscape. For all their benefits, AI systems introduce new vulnerabilities and can yield dangerous outcomes — from the automation of cyberattacks to disinformation campaigns and new forms of warfare.

AI is expected to drive transformative growth in the global economy, but those gains are currently poised to widen inequities, stoke social tensions, and fuel dangerous competition among nations. The AI Security Initiative works across technical, institutional, and policy domains to support the trustworthy development of AI systems, now and into the future. We facilitate research and dialogue to help AI practitioners and decision-makers prioritize actions that will have an outsized impact on the future trajectory of AI security around the world.

The Initiative’s long-term goal is to help communities around the world thrive with safe and responsible automation and machine intelligence. Download a PDF overview of the AI Security Initiative.

What We Do

The AI Security Initiative conducts independent research and engages with technology leaders and policymakers at the state, national, and international levels, leveraging UC Berkeley’s premier reputation and our San Francisco Bay Area location near Silicon Valley. Our activities include conducting and funding technical and policy research and translating that research into practice. We convene international stakeholders, hold policy briefings, publish white papers and op-eds, and engage with leading partner organizations in AI safety, governance, and ethics.

Our research agenda focuses on three key challenges: vulnerabilities, misuse, and power.

Featured Publications

Research and Media


Open Problems in Technical AI Governance

Securing the Future of GenAI: Policy and Technology

Can We Manage the Risks of General-Purpose AI Systems?

Response to NTIA Request for Comments on AI Accountability Policy

How Should Companies Communicate the Risks of Large Language Models to Users?

Evaluating the Social Impact of Generative AI Systems in Systems and Society

Five Takeaways from the NIST AI Risk Management Framework

Actionable Guidance for High-Consequence AI Risk Management

University of California Presidential Working Group on AI Final Report

NIST’s AI Risk Management Framework Should Address Key Societal-Scale Risks

AI & Cybersecurity: Balancing Innovation, Execution & Risk

Now is the Time for Transatlantic Cooperation on Artificial Intelligence
Explainability Won’t Save AI
AI at the Borderlands
Government AI Readiness Index 2020

AI Principles in Context
Pandemic is showing us we need safe and ethical AI more than ever
Artificial Intelligence: Ethics In Practice
3 reasons you should pay attention to the OECD AI principles
Towards an Inclusive Future in AI

The World Isn’t Ready for AI to Upend the Global Economy