AI Security Initiative
Analyzing Global Impacts of Artificial Intelligence
Housed in the UC Berkeley Center for Long-Term Cybersecurity (CLTC), the AI Security Initiative is a growing hub for interdisciplinary research on the global security impacts of artificial intelligence (AI).
The rapid expansion of artificial intelligence has led to a dramatic shift in the global security landscape. For all their benefits, AI systems introduce new vulnerabilities and can yield dangerous outcomes — from the automation of cyberattacks to disinformation campaigns and new forms of warfare.
AI is expected to drive transformative growth in the global economy, but these gains are currently poised to widen inequalities, stoke social tensions, and fuel dangerous competition among nations. The AI Security Initiative works across technical, institutional, and policy domains to support the trustworthy development of AI systems today and into the future.
The Initiative facilitates research and dialogue to help AI practitioners and decision-makers prioritize the actions they can take today that will have an outsized impact on the future trajectory of AI security around the world.
Call for Researchers
CLTC is pleased to announce an open call for UC Berkeley graduate student researchers to join the AI Security Initiative. Accepted applicants will have the opportunity to engage with CLTC staff and its network, and to contribute to an emerging field of study at the intersection of artificial intelligence and cybersecurity.
Researchers will have the opportunity to investigate questions such as the following:
- How does AI shift global power dynamics, and what are the consequences of these shifts?
- How can AI developers or policymakers mitigate mistakes, attacks, and misuse of AI systems?
- How can threat modeling help organizations prepare for risks posed by AI systems?
- How can people and organizations protect themselves against AI-enabled cyberattacks?
- How will the convergence of AI with other consequential technologies alter the threat landscape?
- How can quality assurance and technical standards for AI systems be incorporated into product development and review cycles?
- What are the biggest obstacles to AI-enabled threat detection?
- What are the most important lessons from cybersecurity or other fields that can guide the responsible development and deployment of AI?
To apply, please send a CV and cover letter to firstname.lastname@example.org.