Analyzing Global Impacts of Artificial Intelligence

Who We Are

Housed in the UC Berkeley Center for Long-Term Cybersecurity (CLTC), the AI Security Initiative is a growing hub for interdisciplinary research on the global security impacts of AI.

The rapid expansion of artificial intelligence has led to a dramatic shift in the global security landscape. For all their benefits, AI systems introduce new vulnerabilities and can yield dangerous outcomes — from the automation of cyberattacks to disinformation campaigns and new forms of warfare.

AI is expected to drive transformative growth in the global economy, but these gains are currently poised to widen inequalities, stoke social tensions, and fuel dangerous national competition. The AI Security Initiative works across technical, institutional, and policy domains to support the trustworthy development of AI systems today and into the future. We facilitate research and dialogue to help AI practitioners and decision-makers prioritize the actions that will have an outsized impact on the future trajectory of AI security around the world.

The Initiative’s long-term goal is to help communities around the world thrive with safe and responsible automation and machine intelligence. Download a PDF overview of the AI Security Initiative.

What We Do

The AI Security Initiative conducts independent research and engages with technology leaders and policymakers at the state, national, and international levels, leveraging UC Berkeley’s premier reputation and our San Francisco Bay Area location near Silicon Valley. Our activities include conducting and funding technical and policy research and translating that research into practice. We convene international stakeholders, hold policy briefings, publish white papers and op-eds, and engage with leading partner organizations in AI safety, governance, and ethics.

Our research agenda focuses on the key decision points that will have the greatest impact on the future trajectory of AI security, including decisions about how AI systems are designed, bought, and deployed. These decisions will affect everything from AI standards and norms to global power dynamics and the changing nature of warfare. Our research addresses three key challenges: vulnerabilities, misuse, and power.

[Graphic: Vulnerabilities, Misuse, and Power]

Research and Media

Government AI Readiness Index 2020
The Flight to Safety-Critical AI: Lessons in AI Safety from the Aviation Industry
AI Principles in Context
Pandemic is showing us we need safe and ethical AI more than ever
Decision Points in AI Governance: Three Case Studies Explore Efforts to Operationalize AI Principles
Artificial Intelligence: Ethics In Practice
3 reasons you should pay attention to the OECD AI principles
The new AI competition is over norms
Fair, Reliable, and Safe: California Can Lead the Way on AI Policy to Ensure Benefits for All
The World Isn’t Ready for AI to Upend the Global Economy

Events and Announcements

October 27, 2020: AI Race(s) to the Bottom? Consequences of Competitive AI Development Across Industries

AISI and AI policy experts will discuss when and where AI “races to the bottom” might be more or less harmful, and the surprising ways that specific industries are approaching AI development more cautiously and cooperatively. Learn more.

July 2020: AISI, CITRIS Policy Lab Collaboration with California Department of Technology

AISI, in partnership with the CITRIS Policy Lab, launched a year-long collaboration with the California Department of Technology to analyze AI-enabled tools in select state departments and to develop statewide policy recommendations informing the procurement, development, implementation, and monitoring of such tools in the public sector. Learn more.

February 2020: AISI Speaker Seminar – “Veridical Data Science” featuring Professor Bin Yu

In this seminar, Professor Yu presented her latest work on the predictability, computability, and stability (PCS) framework, which aims to provide responsible, reliable, reproducible, and transparent results across the entire data science life cycle. Learn more.

November 2019: “Human Compatible: AI and the Problem of Control” with Professor Stuart Russell

AISI and the Center for Human-Compatible Artificial Intelligence (CHAI) co-presented a book talk featuring Stuart Russell, author of Human Compatible: Artificial Intelligence and the Problem of Control. Learn more.

Staff

Jessica Newman

Program Lead

Steve Weber

Faculty Director

Ann Cleaveland

Executive Director

Renata Barreto-Montenegro

Graduate Researcher

N. Benjamin Erichson

Postdoctoral Scholar

Will Hunt

Graduate Researcher

Jigyasa Sharma

Graduate Researcher

Advisory Board

Stuart Russell

Professor of Computer Science and Engineering, UC Berkeley; Honorary Fellow, Wadham College, Oxford

Brandie Nonnecke

Founding Director, CITRIS Policy Lab; Co-Director, CITRIS Tech for Social Good Program, UC Berkeley and UC Davis

Philip Reiner

Executive Director, Technology for Global Security

Allan Dafoe

Director, Centre for the Governance of AI, Future of Humanity Institute, University of Oxford