As artificial intelligence reshapes industries and daily life, it is also empowering cybercriminals to execute more sophisticated and devastating attacks. Recognizing the urgent need to address this emerging threat, two leading research centers at the University of California, Berkeley — the Center for Long-Term Cybersecurity (CLTC) and Berkeley Risk and Security Lab (BRSL) — have joined forces to launch a groundbreaking initiative: “AI-Enabled Cybercrime: Exploring Risks, Building Awareness, and Guiding Policy Responses.”
The project will uncover how AI is transforming the cybercrime landscape—from supercharging phishing scams and identity theft to enabling entirely new forms of digital attack—and equip policymakers and industry leaders with forward-looking strategies to counter these threats.
“While we often focus on catastrophic, nation-state cyberattacks, there’s an everyday crisis unfolding: low-cost, high-impact cybercrime that affects millions of ordinary people,” says project lead Dr. Gil Baram, senior lecturer at Bar-Ilan University and non-resident research scholar at UC Berkeley. “Generative AI is lowering barriers to entry, allowing cybercriminals to scale their operations and outpace traditional defenses.”

The research effort leverages CLTC’s expertise in foresight-based scenario planning and BRSL’s cutting-edge use of war games and empirical tabletop exercises. Over nine months, Baram and her team will engage technology hubs in Silicon Valley, Singapore, and Israel through a series of workshops and expert interviews to uncover the dynamics of AI-powered cybercrime and develop forward-looking defense strategies.
“AI is not just amplifying existing cyber threats like ransomware, phishing, and identity theft—it’s creating entirely new pathways for exploitation.”
– Dr. Gil Baram
The initiative kicks off December 17 with a scenario-based tabletop exercise at UC Berkeley. Cybersecurity professionals, academic experts, local government officials, and law enforcement representatives will explore responses as generative AI tools—like those used to create hyper-realistic phishing scams—transform cybercrime and threaten individuals and organizations alike.
This workshop is supported by leading cybersecurity firm Fortinet, which previously partnered with CLTC, the World Economic Forum Centre for Cybersecurity, and other industry collaborators on the foresight-focused Cybersecurity Futures 2030 scenarios. Derek Manky, Chief Security Strategist & Global VP Threat Intelligence at Fortinet, says: “AI is lowering the barrier to entry for aspiring cybercriminals, and more experienced threat actors are using it to increase the volume and velocity of attacks they deploy. Fortinet knows from experience that collaborative partnerships that engage experts across the public and private sectors can meaningfully disrupt the cybercrime ecosystem, particularly as adversaries harness new technologies. We look forward to growing this effort in 2025, working with additional partners to develop novel strategies to combat AI-enabled cybercrime.”
The initiative aims to hold follow-up workshops in Singapore (March 2025) and Tel Aviv, Israel (June 2025), culminating in a public report to be released in summer 2025. The aim is to arm industry leaders and decision-makers with the tools they need to adapt policies and technologies to this fast-changing landscape.
“AI isn’t just changing the rules of the game—it’s creating an entirely new one,” Baram says. “By combining insights from diverse global stakeholders, we hope to guide a stronger, more adaptive response to this unprecedented challenge.”