AI Security Initiative

Analyzing Global Security and Governance Implications of Artificial Intelligence

Who We Are

Housed in the UC Berkeley Center for Long-Term Cybersecurity (CLTC), the AI Security Initiative is a growing hub for interdisciplinary research on the global security implications of AI.

The rapid expansion of artificial intelligence has led to a dramatic shift in the global security landscape. For all their benefits, AI systems introduce new vulnerabilities and can yield dangerous outcomes — from the automation of cyberattacks to disinformation campaigns and new forms of warfare.

AI is expected to contribute transformative growth to the global economy, but these gains are currently poised to widen inequities, stoke social tensions, and motivate dangerous national competition. The AI Security Initiative works across technical, institutional, and policy domains to support trustworthy development of AI systems today and into the future. We facilitate research and dialogue to help AI practitioners and decision-makers prioritize the actions they can take today that will have an outsized impact on the future trajectory of AI security around the world.

The Initiative’s long-term goal is to help communities around the world thrive with safe and responsible automation and machine intelligence. Download a PDF overview of the AI Security Initiative.

What We Do

The AI Security Initiative conducts independent research and engages with technology leaders and policymakers at the state, national, and international levels, leveraging UC Berkeley’s premier reputation and our San Francisco Bay Area location near Silicon Valley. Our activities include conducting and funding technical and policy research and translating that research into practice. We convene international stakeholders, hold policy briefings, publish white papers and op-eds, and engage with leading partner organizations in AI safety, governance, and ethics.

Our research agenda focuses on three key challenges: vulnerabilities, misuse, and power.

Featured Publications


Research and Media


Responsible AI in the Public Sector

International AI Safety Report

Responsible Use of Generative AI Playbook

Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices

Safety in Artificial Intelligence: Challenges and Opportunities for the U.S. National Labs and Beyond

Open Problems in Technical AI Governance

Securing the Future of GenAI: Policy and Technology

Can We Manage the Risks of General-Purpose AI Systems?

Response to NTIA Request for Comments on AI Accountability Policy

How Should Companies Communicate the Risks of Large Language Models to Users?

Evaluating the Social Impact of Generative AI Systems in Systems and Society

Five Takeaways from the NIST AI Risk Management Framework

Actionable Guidance for High-Consequence AI Risk Management

University of California Presidential Working Group on AI Final Report

NIST’s AI Risk Management Framework Should Address Key Societal-Scale Risks

AI & Cybersecurity: Balancing Innovation, Execution & Risk

Now is the Time for Transatlantic Cooperation on Artificial Intelligence

Explainability Won’t Save AI

Government AI Readiness Index 2020

AI Principles in Context

Pandemic is showing us we need safe and ethical AI more than ever

Artificial Intelligence: Ethics In Practice

Towards an Inclusive Future in AI

3 reasons you should pay attention to the OECD AI principles

The World Isn’t Ready for AI to Upend the Global Economy