Analyzing Global Impacts of Artificial Intelligence

Housed in the UC Berkeley Center for Long-Term Cybersecurity (CLTC), the AI Security Initiative is a growing hub for interdisciplinary research on the global security impacts of artificial intelligence (AI).

The rapid expansion of artificial intelligence has led to a dramatic shift in the global security landscape. For all their benefits, AI systems introduce new vulnerabilities and can yield dangerous outcomes — from the automation of cyberattacks to disinformation campaigns and new forms of warfare.

AI is expected to contribute transformative growth to the global economy, but these gains are currently poised to widen inequalities, stoke social tensions, and fuel dangerous competition among nations. The AI Security Initiative works across technical, institutional, and policy domains to support trustworthy development of AI systems today and into the future.

The Initiative facilitates research and dialogue to help AI practitioners and decision-makers prioritize the actions they can take today that will have an outsized impact on the future trajectory of AI security around the world.

Research and Media

Toward AI Security: Global Aspirations for a More Resilient Future

This report introduces a framework for navigating the complex landscape of AI security and applies it in a comparative analysis of AI strategies and policies from ten countries around the world.

Towards an Inclusive Future in AI: A Global Participatory Process

This paper highlights interpretations and proposals for AI inclusion that resulted from 11 workshops in 8 countries.

What the Machine Learning Value Chain Means for Geopolitics

This article introduces the idea of a machine learning value chain and offers insights into the geopolitical implications for countries seeking competitive advantage in the age of AI.

3 Reasons You Should Pay Attention to the OECD AI Principles

This Op-Ed argues that the OECD AI Principles should not be dismissed as yet another set of non-binding AI principles, but recognized as a new global reference point for AI governance.

The New AI Competition Is Over Norms

This Op-Ed argues that a central element of AI leadership is control over the norms and values that shape the development and use of AI around the world.

Fair, Reliable, & Safe: California Can Lead the Way on AI Policy to Ensure Benefits for All

This Op-Ed argues that the vanguard of AI policymaking is at the local and state levels and discusses how California has positioned itself as a leader in responsible AI governance.

The World Isn’t Ready for AI to Upend the Global Economy

This article discusses how AI is significantly altering the global economy and how policymakers can prepare and position themselves in an uneven landscape.

NIST RFI: Developing a Federal AI Standards Engagement Plan

This joint submission to the U.S. National Institute of Standards and Technology (NIST) proposes five ideas for AI standards, identifies key elements of AI leadership, and outlines priorities for government engagement.

CLTC Grant Program

The CLTC grant program supports UC Berkeley faculty and graduate students in carrying out research projects on the security implications of artificial intelligence.

Staff

Program Lead

Jessica Cussins Newman

Faculty Director

Steve Weber

Executive Director

Ann Cleaveland

Graduate Researcher

Renata Barreto-Montenegro

Postdoctoral Scholar

N. Benjamin Erichson

Graduate Researcher

Jigyasa Sharma