Date | Type | Research Item | Topics |
---|---|---|---|
November 19, 2024 | White Paper | CyberCAN: Cybersecurity for Cities and Nonprofits. By: Sarah Powazek, Shannon Pierson. A new CLTC report provides guidance to help government leaders in San Francisco and other cities more effectively support the digital security of local nonprofits. The report, “CyberCAN:… | |
October 31, 2024 | White Paper | Resistance to Text-to-Image Generators in Creator Communities. Text-to-image generators like DALL-E have sparked controversy among artistic creators, who are concerned about how generative artificial intelligence (Gen AI) models have been trained with copyright-protected materials. A… | |
October 1, 2024 | White Paper | Cyber Resilience and Social Equity: Twin Pillars of a Sustainable Energy Future. By: Emma Stewart, Remy Stolworthy, Virginia Wright. A report published by the Center for Long-Term Cybersecurity, Cyber Resilience and Social Equity: Twin Pillars of a Sustainable Energy Future, examines the importance of cybersecurity in ensuring equitable access to energy. In an era of worsening cybersecurity threats, the paper advocates for “sustainable energy delivery systems that ensure robust defenses without compromising the goals of reducing energy poverty and ensuring energy security.” | digital harms, homeland security, infrastructure, policy, public interest cybersecurity |
September 3, 2024 | White Paper | A Swarm Intelligence Approach to Prioritizing the CIS Controls V8.0 Implementation. By: Hayat Abdulla Asad Cue, Thirimachos Bourlai, Mark Lupo. Authored by researchers affiliated with CyberArch, a cybersecurity clinic at the University of Georgia’s Carl Vinson Institute of Government (CVIOG), this paper introduces a novel approach for ranking actions outlined in the Center for Internet Security (CIS) framework, a prioritized set of safeguards to help organizations mitigate common cyber attacks. | cyber talent pipeline, digital harms, governance, MITRE ATT&CK, public interest cybersecurity, risk, usable security, vulnerable populations |
July 2, 2024 | White Paper | Improving the Explainability of Artificial Intelligence: The Promises and Limitations of Counterfactual Explanations. By: Alexander Asemota. A new white paper from the Center for Long-Term Cybersecurity explores a diverse approach to explainable artificial intelligence (xAI), focusing on counterfactual explanations (CTEs). The paper, “Improving the Explainability of Artificial Intelligence: The Promises and Limitations of Counterfactual Explanations,” was authored by Alexander Asemota, a 2023-2024 AI Policy Hub Fellow. | |
May 16, 2024 | White Paper | Benchmark Early and Red Team Often: A Framework for Assessing and Managing Dual-Use Hazards of AI Foundation Models. By: Anthony Barrett, Krystal Jackson, Jessica Newman, Nada Madkour, Evan R. Murphy. Authored by Anthony M. Barrett, Krystal Jackson, Evan R. Murphy, Nada Madkour, and Jessica Newman, this report assesses two methods for evaluating the “dual-use” hazards of AI foundation models, which include large language models (LLMs) such as GPT-4, Gemini, Claude 3, Llama 3, and other general-purpose AI systems. | artificial intelligence (AI), digital harms, homeland security, machine learning (ML), national security, risk |
April 23, 2024 | White Paper | The Transaction Costs of Municipal Cyber Risk Management. By: Rowland Herbert-Faulkner. This white paper, The Transaction Costs of Municipal Cyber Risk Management, brings to light the transaction costs associated with municipal cyber risk management, including the costs of searching for information, coordination between parties, drawing up and enforcing contracts, negotiation, inventory and monitoring, and compliance and enforcement. The paper was authored by Rowland Herbert-Faulkner, a PhD candidate in the Department of City and Regional Planning at UC Berkeley whose dissertation research focuses on technology governance at the municipal and regional scales. | board governance, digital harms, governance, insurance, public interest cybersecurity |
January 23, 2024 | White Paper | Representing Privacy Legislation as Business Risks. By: Andrew Chong, Richmond Wong. For this CLTC white paper, researchers Richmond Wong and Andrew Chong used Form 10-K documents — annual regulatory reports for investors that publicly traded companies must file with the U.S. Securities and Exchange Commission (SEC) — to analyze how nine major technology companies assess and integrate the business risks of privacy regulations such as the EU’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the California Privacy Rights Act (CPRA). | CCPA, CPRA, digital harms, GDPR, governance, privacy, risk, risk communication |
December 5, 2023 | White Paper | Cybersecurity Futures 2030: New Foundations. To better understand how diverse forces are shaping the future of cybersecurity for governments and organizations, the Center for Long-Term Cybersecurity (CLTC), the World Economic Forum Centre for Cybersecurity, and CNA’s Institute for Public Research collaborated on “Cybersecurity Futures 2030: New Foundations,” a foresight-focused research initiative that aims to inform cybersecurity strategic plans around the globe. | artificial intelligence (AI), cyber talent pipeline, deterrence, differential privacy, digital harms, internet fragmentation, machine learning (ML), misinformation, national security, policy, scenarios, surveillance |
November 8, 2023 | White Paper | AI Risk-Management Standards Profile for General-Purpose AI Systems (GPAIS) and Foundation Models. By: Anthony Barrett, Jessica Newman, Brandie Nonnecke. Increasingly general-purpose AI systems, such as BERT, CLIP, GPT-4, DALL-E 2, and PaLM, can provide many beneficial capabilities, but they also introduce risks of adverse events with societal-scale consequences. This document provides risk-management practices or controls for identifying, analyzing, and mitigating the risks of such AI systems. It is intended primarily for developers of these AI systems; others who can benefit from this guidance include downstream developers of end-use applications that build on a general-purpose AI system platform. The document facilitates conformity with leading AI risk management standards and frameworks, adapting and building on the generic voluntary guidance in the NIST AI RMF and the ISO/IEC 23894 AI risk management standard, with a focus on the unique issues faced by developers of increasingly general-purpose AI systems. | artificial intelligence (AI), digital harms, governance, NIST, policy, reinforcement learning, risk |