AI Security Initiative

Analyzing Global Impacts of Artificial Intelligence


Housed in the UC Berkeley Center for Long-Term Cybersecurity (CLTC), the AI Security Initiative is a growing hub for interdisciplinary research on the global security impacts of artificial intelligence (AI).

The rapid expansion of artificial intelligence has led to a dramatic shift in the global security landscape. For all their benefits, AI systems introduce new vulnerabilities and can yield dangerous outcomes — from the automation of cyberattacks to disinformation campaigns and new forms of warfare.

AI is expected to drive transformative growth in the global economy, but these gains are currently poised to widen inequalities, stoke social tensions, and motivate dangerous national competition. The AI Security Initiative works across technical, institutional, and policy domains to support the trustworthy development of AI systems today and into the future.

The Initiative facilitates research and dialogue to help AI practitioners and decision-makers prioritize the actions they can take today that will have an outsized impact on the future trajectory of AI security around the world.

Call for Researchers

From left: Steve Weber, Jessica Cussins Newman, Brandie Nonnecke, Kevin Kiley, Ed Chau, & Max Tegmark at the CA State Legislative Policy Briefing on AI

CLTC is pleased to announce an open call for UC Berkeley graduate student researchers to join the AI Security Initiative. Accepted applicants will have the opportunity to engage with CLTC staff and the Center's broader network, and to contribute to an emerging field of study at the intersection of artificial intelligence and cybersecurity.

Researchers will have the opportunity to investigate questions such as the following:

  • How does AI shift global power dynamics, and what are the consequences of these shifts?
  • How can AI developers or policymakers mitigate the risks of mistakes, attacks, and misuse of AI systems?
  • How can threat modeling help organizations prepare for risks posed by AI systems?
  • How can people and organizations protect themselves against AI-enabled cyberattacks?
  • How will the convergence of AI with other consequential technologies alter the threat landscape?
  • How can quality assurance and technical standards for AI systems be integrated into product development and review cycles?
  • What are the biggest obstacles to AI-enabled threat detection?
  • What are the most important lessons from cybersecurity or other fields that can guide the responsible development and deployment of AI?

To apply, please send a CV and cover letter to cltc@berkeley.edu.


Research and Media

Toward AI Security: Global Aspirations for a More Resilient Future

This report introduces a framework for navigating the complex landscape of AI security, which is then used to facilitate a comparative analysis of AI strategies and policies from ten countries around the world.

What the Machine Learning Value Chain Means for Geopolitics

This article introduces the idea of a machine learning value chain and offers insights on the geopolitical implications for countries searching for competitive advantage in the age of AI.

3 Reasons You Should Pay Attention to the OECD AI Principles

This Op-Ed argues that the OECD AI Principles should not be dismissed as yet another set of non-binding AI principles, but recognized as a new global reference point for AI governance.

The new AI competition is over norms

This Op-Ed argues that a central element of AI leadership is control over the norms and values that shape the development and use of AI around the world.

Fair, Reliable, & Safe: California Can Lead the Way on AI Policy to Ensure Benefits for All

This Op-Ed argues that the vanguard of AI policymaking is at the local and state levels, and discusses how California has positioned itself as a leader in responsible AI governance.

The World Isn’t Ready for AI to Upend the Global Economy

This article discusses how AI is significantly altering the global economy and how policymakers can prepare and position themselves in an uneven landscape.

NIST RFI: Developing a Federal AI Standards Engagement Plan

This joint submission to the U.S. National Institute of Standards and Technology (NIST) includes five standards ideas, key elements of AI leadership, and priorities for government engagement.

CLTC Grant Program

The CLTC grant program supports UC Berkeley faculty and graduate students in carrying out research projects on the security implications of artificial intelligence.

Staff

Jessica Cussins Newman

Research Fellow

Steve Weber

Faculty Director

Ann Cleaveland

Executive Director