Tag: AI

May 25, 2022

Can Documentation Improve Accountability for Artificial Intelligence?

Numerous AI documentation processes and practices have been developed in recent years, with goals including improving transparency, safety, fairness, and accountability for the development and uses of AI systems. Well-known AI documentation standards include Google's Model Cards, Microsoft's Datasheets for Datasets, IBM's FactSheets, and more recently Meta's…

April 4, 2022

AI Policy Hub Now Accepting Applications

The UC Berkeley AI Policy Hub is now accepting applications for its inaugural Fall 2022 – Spring 2023 cohort. Applications are due by Tuesday, April 26 at 11:59 PM (PDT). What are the benefits of the program to participants? Participants of the AI Policy Hub will have the…

February 8, 2022

New CLTC White Paper Proposes “Reward Reports” for Reinforcement Learning Systems

“Choices, Risks, and Reward Reports: Charting Public Policy for Reinforcement Learning Systems,” a new report by a team of researchers affiliated with the UC Berkeley Center for Long-Term Cybersecurity’s Artificial Intelligence Security Initiative (AISI), examines potential benefits and challenges related to reinforcement learning, and provides recommendations to help policymakers ensure that RL-based systems are deployed safely and responsibly.

June 3, 2021

The New Horizon for Data Rights: New Perspectives on Privacy, Security, and the Public Good

A CLTC panel focused on shifting the paradigm around data and privacy. Emerging technologies call for new policies and business models that increase the value of the data they generate while also preserving privacy and security. Internet of things (IoT) devices, digital assistants, ubiquitous sensors, and augmented/virtual reality (AR/VR) environments…

January 14, 2021

CLTC Research Exchange, Day 3: Long-Term Security Implications of AI/ML Systems

On December 10, the Center for Long-Term Cybersecurity hosted the third event in our 2020 Research Exchange, a series of three virtual conferences that showcased CLTC-funded researchers working across a wide spectrum of cybersecurity-related topics. The December event, themed "Long-Term Security Implications of AI/ML Systems," featured talks from a…

November 3, 2020

AI Race(s) to the Bottom? A Panel Discussion

Countries and corporations around the world are vying for leadership in AI development and use, prompting widespread discussions of an “AI arms race” or “race to the bottom” in AI safety. But the competitive development of AI will take place across multiple industries and among very different sets of actors,…

May 5, 2020

New CLTC Report: “Decision Points in AI Governance”

The Center for Long-Term Cybersecurity (CLTC) has issued a new report that takes an in-depth look at recent efforts to translate artificial intelligence (AI) principles into practice. The report, “Decision Points in AI Governance,” authored by CLTC Research Fellow and AI Security Initiative (AISI) Program Lead Jessica Cussins Newman, provides an overview of 35 efforts already under way to implement AI principles, ranging from tools and frameworks to standards and initiatives that can be applied at different stages of the AI development pipeline.

April 10, 2020

New Paper: “Artificial Intelligence Ethics in Practice”

In a recent paper, “Artificial Intelligence Ethics in Practice,” published in ;login:, the USENIX Magazine, Jessica Cussins Newman, Research Fellow at CLTC and Program Director for the AI Security Initiative, together with Rajvardhan Oak, a graduate researcher at CLTC, described some of the key ethical challenges associated with artificial intelligence.

February 5, 2020

“What, So What, Now What?”: Adversarial Machine Learning

CLTC has launched a new series of "explainer videos" to break down complex cybersecurity-related topics for a lay audience. The first of these videos focuses on "adversarial machine learning," in which AI systems can be deceived (by attackers, or "adversaries") into making incorrect assessments. An adversarial attack might entail…
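The basic idea behind such attacks can be illustrated with a minimal, hypothetical sketch (not drawn from the CLTC video): a fast-gradient-sign-style perturbation against a toy logistic-regression classifier, where nudging the input along the sign of the loss gradient is enough to flip the model's prediction. The model, weights, and epsilon value below are illustrative assumptions, not part of the original post.

```python
# Illustrative sketch only: a fast-gradient-sign-style adversarial perturbation
# against a toy logistic-regression "model", using plain NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Toy model: fixed weights and bias of a logistic-regression classifier.
w = rng.normal(size=20)
b = 0.1

def predict_prob(x):
    """Model's probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def adversarial_example(x, epsilon=0.25):
    """Nudge x in the direction that most increases the model's loss.

    For logistic regression with true label 1, the gradient of the
    cross-entropy loss with respect to the input is -(1 - p) * w, so a
    small step along the sign of that gradient pushes the prediction
    toward the wrong class while barely changing the input.
    """
    grad_loss_wrt_x = -(1.0 - predict_prob(x)) * w
    return x + epsilon * np.sign(grad_loss_wrt_x)

x = rng.normal(size=20)        # a "clean" input, assumed to have label 1
x_adv = adversarial_example(x)  # small, deliberately crafted perturbation

print(f"clean prediction:       {predict_prob(x):.3f}")
print(f"adversarial prediction: {predict_prob(x_adv):.3f}")
```

Running the sketch shows the perturbed input receiving a sharply lower class-1 probability than the clean input, even though each feature changed by at most epsilon.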