Research Library

Since 2015, CLTC has directly funded more than 160 projects by UC Berkeley students, faculty & affiliates on original cybersecurity research topics, in addition to developing our own white papers, publications, blogs, policy analysis and recommendations, and open-source curricula and toolkits.



January 23, 2024

White Paper

Representing Privacy Legislation as Business Risks

By: Andrew Chong, Richmond Wong

For this CLTC white paper, researchers Richmond Wong and Andrew Chong used Form 10-K documents — annual regulatory reports for investors that publicly traded companies must file with the U.S. Securities and Exchange Commission (SEC) — to analyze how nine major technology companies assess and integrate the business risks of privacy regulations such as the EU’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the California Privacy Rights Act (CPRA).

December 5, 2023

White Paper

Cybersecurity Futures 2030: New Foundations

To better understand how diverse forces are shaping the future of cybersecurity for governments and organizations, the Center for Long-Term Cybersecurity (CLTC), the World Economic Forum Centre for Cybersecurity, and CNA’s Institute for Public Research collaborated on “Cybersecurity Futures 2030: New Foundations,” a foresight-focused research initiative that aims to inform cybersecurity strategic plans around the globe.

November 8, 2023

White Paper

AI Risk-Management Standards Profile for General-Purpose AI Systems (GPAIS) and Foundation Models

By: Anthony Barrett, Jessica Newman, Brandie Nonnecke

Increasingly general-purpose AI systems, such as BERT, CLIP, GPT-4, DALL-E 2, and PaLM, can provide many beneficial capabilities, but they also introduce risks of adverse events with societal-scale consequences. This document provides risk-management practices or controls for identifying, analyzing, and mitigating risks of such AI systems. The document is intended primarily for developers of these AI systems; others that can benefit from this guidance include downstream developers of end-use applications that build on a general-purpose AI system platform. This document facilitates conformity with leading AI risk management standards and frameworks, adapting and building on the generic voluntary guidance in the NIST AI RMF and ISO/IEC 23894 AI risk management standard, with a focus on the unique issues faced by developers of increasingly general-purpose AI systems.

September 27, 2023

Policy Brief

Policy Brief on AI Risk Management Standards for General-Purpose AI Systems (GPAIS) and Foundation Models

By: Anthony Barrett, Jessica Newman, Brandie Nonnecke

UC Berkeley researchers are leading an effort to create an AI risk-management standards profile for general-purpose AI systems (GPAIS), foundation models, and generative AI, such as cutting-edge large language models.

September 13, 2023

White Paper

A Comparative Study of Interdisciplinary Cybersecurity Education

By: Lisa Ho, Sahar Rabiei, Drake White

Authored by Lisa Ho and researchers from the UC Berkeley School of Information, this report examines how different universities approach the challenge of teaching cybersecurity through an interdisciplinary lens, with the goal of guiding other educational institutions as they develop their own cybersecurity programs.

August 7, 2023

White Paper

A Template for Voluntary Corporate Reporting on Data Governance, Cybersecurity, and AI

By: Jordan Famularo

Corporations are increasingly called upon to disclose their practices around technology, including how they manage data, cybersecurity, and artificial intelligence. Yet no clear standard prescribes what such reporting…