February 3, 2016

CLTC Announces $900,000 in Inaugural Research Grants


How can organizations better detect spear-phishing cyberattacks? How could neural signals be used as a method of online authentication? How effective are tactics such as financial account closures and asset seizures in deterring cyber criminals? What types of defenses could help protect at-risk activists and NGOs from state-level surveillance?

These are just a few of the questions that will be considered by researchers funded by the UC Berkeley Center for Long-Term Cybersecurity (CLTC), which has announced it has allocated more than $900,000 across 22 research teams to support work to be carried out in the 2016 calendar year.

“It’s an honor for us to be in a position to help Berkeley researchers advance and extend their cybersecurity work,” says Professor Steve Weber, Faculty Director for the CLTC. “The breadth and ambition of the research projects is stunning. The scope of the work shows some of the ways in which the cybersecurity agenda is evolving.”

The grants are the first to have been given by the CLTC, which was established in 2015 through generous funding from the Hewlett Foundation. Housed in the School of Information (I School), the CLTC serves as a hub for industry, academia, policy, and practitioners, with research and programs focused on a future-oriented conceptualization of cybersecurity—what it could imply and mean for human beings, machines, and the societies that will depend on both.

Winning grant proposals were chosen from 50 submitted proposals through a selection process that began in Fall 2015. All of the grantees are UC Berkeley-affiliated researchers, though some projects have partnerships with external institutions, including the Machine Intelligence Research Institute and Carnegie Mellon University. Three of the projects were jointly funded with the UC Berkeley Center for Technology, Society & Policy, a multi-disciplinary research center focused on emergent social and policy issues of technology.

 

Project Descriptions

Below is a list of the grantees, including the title of the project, the name of the lead researcher and partners, and a summary description.

Cybercrime Science: Understanding Cybercriminal Networks and the Effect of Disruption

Lead: Sadia Afroz, Research Scientist, International Computer Science Institute

As technology evolves, abuse and cybercrime evolve with it. Cybercriminals today abuse and monetize every aspect of technology. However, understanding how criminals profit from online abuse, and which methods effectively disrupt criminal efforts, is still ad hoc: often based on anecdotal evidence, specific to a particular cybercrime, and accomplished primarily through analysis of limited structured metadata and painstaking manual analysis. The key challenge is to automate this process, since this labor-intensive manual approach does not scale. The researcher proposes to build and evaluate a generalizable and scalable framework for automatically analyzing online crime. The framework will examine cybercrime as a community-based activity, analyze how information flows between different communities of cybercriminal networks, automatically discover the roles of these communities, and identify cost-effective methods for disrupting these networks.

 

The Internet’s Challenge to the State

Lead: Vinod K. Aggarwal, Professor, Department of Political Science, and Director, Berkeley Asia Pacific Economic Cooperation (APEC) Study Center

Partners: Andrew Reddie, PhD candidate, Department of Political Science; Claire Tam, PhD candidate, Department of Political Science

The Internet is the latest in a long line of technologies promising to connect ever-increasing numbers of people. Despite obvious benefits, however, its potentially disruptive consequences for commerce, daily life, and governance are innumerable. Various actors including civil society, NGOs, international organizations, terrorist groups, and hacker collectives are now able to take part in politics on, inside, and of the Web. In light of these challenges, this project seeks to better understand the political challenges posed to states by the Internet. Specifically, the study considers 1) how threats emanating from non-state actors change the development of technology; 2) the locations of vulnerability for the state; and 3) how the Internet might change in the process of “securitization.” The findings of the study have immediate policy implications for governments as well as broader consequences for theorizing in International Relations around the role of non-state actors in international affairs, new spaces for governance, and the future of conflict.

 

Trust, Community, and the Production of Cybersecurity Professionals

Leads: Coye Cheshire, Professor, School of Information; Ashwin J. Mathew, Visiting Scholar, School of Information, and Internet Infrastructure Researcher, Packet Clearing House

There is a global shortfall in the number of qualified cybersecurity professionals required to fill critical roles in governments, industry, and society. Numerous education programs currently attempt to address this shortfall. However, the practice of cybersecurity requires more than just skills that can be learned in a classroom. We argue that the everyday practices of cybersecurity professionals depend upon coordination and collaboration with peers, enabled by trust relationships crossing corporate and state boundaries. We will conduct ethnographic research to examine how individuals learn the practice of network security and become trusted members of network security practitioner communities. Additionally, we will review current thinking in the pedagogy of network security. Our research will offer actionable recommendations towards: (1) improving institutions and policies that enable trust relationships amongst network security professionals to support more effective practices, and (2) integrating understandings of coordination and collaboration in the practice of network security into network security education.

 

Security and Privacy of Biosensing at Scale

Leads: John Chuang, Professor, School of Information; Tapan Parikh, Professor, School of Information

Next-generation ubiquitous biosensors will allow us to continuously monitor a wide range of physiological signals, from which many inferences can be drawn — our identity, our activities, our mental and emotional states, memories and thoughts, as well as predispositions to diseases and behaviors. Novel biosensing applications and business models will raise new security and privacy challenges that are not yet fully anticipated or understood. We will probe how people interpret (or misinterpret) the meaning of biosignals in different contexts, to shed light on how and when biosignals might become sensitive. We will investigate the feasibility of user authentication using neural signals captured with novel methods, and the possibility of user re-identification from anonymized brainwave signals. We will interrogate ubiquitous biosensing technologies from an ethical, legal, and policy perspective. By studying these different facets of biosensing security and privacy, we hope to uncover, understand, and address the security challenges that will arise when these technologies are deployed at scale.

 

Cybersecurity and Corporate Governance

Lead: Steven Davidoff Solomon, Professor of Law and Faculty Co-Director, Berkeley Center for Law, Business and the Economy

Partners: Adam Sterling, Executive Director, Berkeley Center for Law, Business and the Economy

Cybersecurity is a major risk area for the private sector. Corporate directors are concerned with how they can protect themselves and their companies against cyber attacks and the potential liability associated with such attacks. This project will explore cybersecurity as a corporate governance issue; it will identify challenges corporate boards are likely to face over the next decade and the tools and resources available to assist them. Our research will address several key questions: (1) what are the potential legal liabilities of directors and managers in connection with cyber attacks; (2) what best practices can corporate boards employ to protect themselves and their companies; and (3) are there gaps in cyber protection that corporate leaders may be overlooking? To answer these questions, we will work with industry leaders to conduct a survey of current corporate cybersecurity practices and determine how they may or may not satisfy the fiduciary duties of corporate directors. This survey also will explore gaps in cyber protection and actions that corporate leaders may be failing to take to protect their companies and avoid liability.

 

Unpacking Cybersecurity “Information Sharing” for an Uncertain Future

Leads: Jim Dempsey, Executive Director, Berkeley Center for Law & Technology; Elaine Sedenberg, PhD candidate, School of Information

Partners: Nick Weaver, Senior Scientist, International Computer Science Institute

For years, the phrase “information sharing” has been used in cybersecurity policy discussions without much attention to what is to be shared, and also without reference to sharing mechanisms already in place. We will unpack this overused—but under-defined—term and will seek to bring granularity to the understanding of information sharing initiatives. This project includes an inventory and analysis of information currently exchanged or contemplated under new legislation, the associated efficacy, cost/risk/benefit tradeoffs, and emerging future information sharing needs. The team brings legal, policy, and technical expertise together, along with quantitative and qualitative methodologies to provide ready-to-implement recommendations for policymakers, researchers, and industry stakeholders.

 

The Security Behavior Observatory

Lead: Serge Egelman, Senior Researcher, International Computer Science Institute

Partners: Alessandro Acquisti, Professor of Information Technology and Public Policy, Heinz College, Carnegie Mellon University; Lorrie Cranor, Professor of Computer Science and Engineering & Public Policy, Carnegie Mellon University; Nicolas Christin, Assistant Research Professor, Electrical and Computer Engineering, Carnegie Mellon University; Rahul Telang, Professor of Information Systems, Heinz College, Carnegie Mellon University

Security issues often occur when there are disconnects between users’ understanding of their role in computer security and what is expected of them. To help users make better security decisions, we need insights into the daily challenges users face. We have developed the Security Behavior Observatory (SBO), a panel of participants consenting to our observing their daily computing behavior, so that we can understand what constitutes “insecure” behavior. By combining qualitative user interviews with quantitative system measurements from the SBO, we propose to undertake several studies that aim to characterize precisely what constitutes risky behavior. More specifically, we want to determine which specific actions users take that result in an insecure system, and why users undertake these actions in the first place. Ultimately, a better understanding of how users get infected could inform future policies toward unwanted software distribution, and can help us design more effective user-centered mitigations.

 

Using Individual Differences to Tailor Security Mitigations

Lead: Serge Egelman, Senior Researcher, International Computer Science Institute

Partners: Eyal Peer, Senior Lecturer and the Head of Marketing, Graduate School of Business Administration, Bar-Ilan University

While the burgeoning field of “usable security” has made security mechanisms more usable by humans in general, prior research has invariably come up short because not all humans respond the same way to stimuli or share the same preferences. Our goal is to examine the ways in which security mitigations can be tailored to individuals, and how this is likely to result in even greater security compliance than has been previously achieved through user-centric design. While previous work shows that individual differences are predictive of privacy and security attitudes, further research is needed to explore the myriad ways in which this can be applied. Our research agenda will center on reframing security mitigation designs so that they target the decision-making dimensions we previously found to be predictive of computer security attitudes. This will include iterative human-subjects experimentation to evaluate whether targeted mitigations result in greater compliance.

 

Robust Access in Hostile Networks

Leads: David Fifield, PhD candidate, Department of Electrical Engineering and Computer Sciences; Doug Tygar, Professor, Department of Electrical Engineering and Computer Sciences and School of Information; Xiao Qiang, Adjunct Professor, School of Information

Our research is about providing safe access to the Internet in places where network access is restricted or censored. Many people are limited in what they can say and do online because of restrictive filters that block websites. These filters also put people at risk of surveillance and infection by viruses and other forms of malware. We build and operate systems that circumvent network obstacles and enable people to access blocked websites safely. We do this by, for example, disguising visits to blocked websites so they appear to be something else to the network filters. We emphasize systems that are practical and deployable in the near term, and that will continue working even after they have become popular.

 

Secure Machine Learning for Adversarial Environments

Leads: Anthony Joseph, Professor, Department of Electrical Engineering and Computer Sciences; Doug Tygar, Professor, Department of Electrical Engineering and Computer Sciences and School of Information

We plan to build a pipeline that leverages novel robust secure machine learning techniques to detect and defeat cybersecurity threats against computer systems. Modern cyber-threats to computer systems constitute a game-theoretic arms race in which sophisticated, well-funded attackers evolve to evade detection and detection mechanisms react. We plan to analyze the game-theoretic dynamics of a robust secure machine learning system, specifically where adversaries are attempting to evade or mislead the machine learning system, and to develop novel techniques for making the system robust against such attacks. We have received large datasets for our study from several prominent commercial providers.

 

Privacy, Disclosure, and Social Exchange Theory

Lead: Jennifer King, PhD candidate, School of Information

The privacy of one’s personal information—the choice of when to disclose it and to whom, how one maintains control over it, and the risks of disclosure—continues to be a topic of much debate and research. My research draws on a theoretical orientation from the social sciences—social exchange theory (SET)—to explore personal information disclosure, looking specifically at the context of an exchange relationship between an individual and the company or service receiving the information. This project explores ‘new meanings of privacy’ by examining novel aspects of personal information disclosure, a decision-making process that also has a direct impact on the cybersecurity agenda. Given that many of the problems in cybersecurity originate with human behavior, this research agenda works to expand the understanding of the dynamics of personal information disclosure beyond a focus on individual cognition, incorporating the social and organizational mechanics that also influence decision-making.

 

(Im)balances of Power in the Age of Personal Data

Lead: Paul Laskowski, Assistant Adjunct Professor, Berkeley School of Information

Partners: Benjamin Johnson, Research Associate, Carnegie Mellon University

Emerging technologies that record and analyze our purchases and behaviors are fueling new business models and supporting valuable products, but also eroding our expectations of privacy. To assess the true cost of personal data collection, we must look beyond individual decisions to the cumulative impact of many citizens on technology and institutions. This initiative examines the role of citizens as a whole, especially in relation to governments and corporations. Applying a combination of game-theoretic modeling and behavioral studies, we seek to measure how individual privacy choices enable or prevent specific abuses of power. Our methods are aimed at immediate trends in data collection, as well as long-term scenarios. The ultimate deliverable for the project will be a report outlining possible structures that personal data exchange markets may take 30 years into the future. By looking so far into the future, we hope to bring a new perspective to technological practices and policy decisions.

 

Constructing Intermediary Policies to Effectively Deter Financially-Motivated Cyber Criminals

Lead: Damon McCoy, Staff Researcher, International Computer Science Institute

Partners: Chris Jay Hoofnagle, Adjunct Full Professor, School of Information; Vern Paxson, Professor, Department of Electrical Engineering and Computer Sciences, and Director, Networking and Security Group, International Computer Science Institute

In recent years, policymakers have changed their approach to regulating financially-motivated cybercrime. Instead of pursuing individual bad actors, new policies seek to alter the structural relationships in cybercrime by regulating intermediaries used by computer criminals. These include: financial disincentives, financial account closures, asset seizures, and blacklisting individuals from interacting with financial institutions. We seek to perform an exploratory investigation of key questions regarding these new methods of deterring cyber criminals, such as: What intermediary-regulation approaches have been taken by policymakers, and how do they differ? How effective are these policies? What pitfalls might they have in terms of collateral damage? How might we improve these policies to ensure adequate oversight? In summary, this project seeks to undertake an exploratory investigation to understand the feasibility of identifying effective policies that will serve as strong deterrents to financially-motivated cyber criminals without unduly impacting companies and individuals not involved in cyber criminal activities.

 

Cybersecurity: Meaning and Practice

Lead: Deirdre Mulligan, Associate Professor, School of Information

Partners: Kenneth A. Bamberger, Professor of Law; David Bamman, Assistant Professor, School of Information; Geoffrey Nunberg, Adjunct Professor, School of Information; Elaine Sedenberg, PhD candidate, School of Information; Richmond Wong, PhD candidate, School of Information

There is little empirical research documenting the various meanings of cybersecurity in use in distinct communities, their relationships, and the activities they drive in practice. This project, “Cybersecurity: Meaning and Practice,” seeks to expand upon the examination of cybersecurity’s meaning using the theoretical framework of “securitization” (Nissenbaum 2005) and to explore the relationship between meaning and practice. Our long-term goal is to conduct qualitative semi-structured expert interviews, quantitative text analysis, and discourse analysis to provide an (admittedly partial) answer to the foundational question of cybersecurity’s meaning, its relationship to practices in the field including policy development, funding, and organizational activities to advance cybersecurity, and the cybersecurity futures and risks it imagines. Under this scoping grant, we will define the selection criteria and boundaries for the subjects of analysis, and assess the landscape of available data in different domains that can form the foundation for empirical study.

 

Social Media Data and Cybersecurity

Lead: Galen Panger, PhD candidate, School of Information

With the rise of Big Data and the tools of data science, researchers have begun to develop new predictive algorithms that make inferences about people’s health, well-being, and livelihoods from vast streams of data. This research program will examine the scientific validity of an increasingly popular though experimental approach that makes inferences about individual and societal well-being from sentiment analysis of publicly available social media data. If validated, such a social media indicator of well-being would be a valuable aid to policymakers in evaluating the impact of new policies, but might also aid adversaries in evaluating their efforts to harm or disrupt a populace. Understanding how well predictive algorithms like this work will help policymakers better understand the risks of mass self-disclosure online, and will contribute to a more expansive vision of cybersecurity, which includes the harnessing of new data sources to understand and promote social well-being. The grant will support a pilot study to evaluate the viability of the research design and procedures for a planned large-scale validity study.

 

Illuminating and Defending Against Targeted Government Surveillance of Activists

Lead: Vern Paxson, Professor, Department of Electrical Engineering and Computer Sciences, and Director, Networking and Security Group, International Computer Science Institute

Partners: Bill Marczak, PhD candidate, Department of Electrical Engineering and Computer Sciences; Nick Weaver, Senior Scientist, International Computer Science Institute

This effort focuses on developing a deeper understanding of the nature, scope, and prevalence of abusive state-level surveillance and its extra-judicial use as a potent form of social control, as seen through the lens of targeted surveillance of activists and political opponents. Drawing upon a network of targeted individuals and groups in the Middle East with whom we have established extensive ties, we will pursue (1) target-focused forensics (analyzing the nature of such attacks, the perpetrators, and their associated activity, identifying new attack modalities as they emerge); (2) global measurements (assessing the prevalence of surveillance techniques by employing broad scanning and DNS cache-inference techniques to profile the footprint of surveillance spyware); and (3) target-appropriate defenses specifically focused on the needs of at-risk activists and NGOs (including approaches for automatically vetting messages for social engineering attacks).

 

Corrigibility in Artificial Intelligence Systems

Lead: Stuart Russell, Professor, Department of Electrical Engineering and Computer Sciences

Partners: Patrick LaVictoire, Research Fellow, Machine Intelligence Research Institute

This project will focus on basic security issues for advanced AI systems. It anticipates a time when AI systems are capable of devising behaviors that circumvent simple security policies such as “turning the machine off.” These behaviors, which may include deceiving human operators and disabling the “off” switch, result not from spontaneous “evil intent” but from the rational pursuit of human-specified objectives in complex environments. The main goal of our research is to design incentive structures that provably lead to corrigible systems – systems whose behavior can be corrected by human input during operation.

 

Blazar: Secure and Practical Program Hardening

Lead: Dawn Song, Professor, Department of Electrical Engineering and Computer Sciences

Partners: Chao Zhang, Postdoctoral Scholar, Department of Electrical Engineering and Computer Sciences

One root cause of cybersecurity threats is vulnerabilities in programs. Complex software inevitably has vulnerabilities that attackers can exploit to compromise the system. We propose to design and develop a hardening solution that protects programs from attacks even when they contain vulnerabilities. In particular, we propose a secure and practical solution, Blazar, to automatically rewrite vulnerable programs to enforce certain security policies, thus protecting them from attacks even when the original program contains vulnerabilities. Blazar is transparent to developers, and thus easy to use. It is designed to have low performance overhead. Blazar leverages our earlier work to build a secure and practical solution that we plan to deploy in practice.

 

Defense Against Social Engineering Attacks

Leads: David Wagner, Professor, Department of Electrical Engineering and Computer Sciences; Vern Paxson, Professor, Department of Electrical Engineering and Computer Sciences, and Director, Networking and Security Group, International Computer Science Institute

Partners: Grant Ho, PhD candidate, Department of Electrical Engineering and Computer Sciences

We will study how to detect targeted social engineering attacks that occur online. We will especially focus on spear phishing, which in recent years has been used to penetrate many enterprise and government systems. For instance, spear phishing has allowed attackers to steal over 40 million personal health records from major insurance companies and obtain background check information on over 20 million people from government systems. We will develop methods to detect spear phishing attacks, based on the patterns they induce in attack emails, with the goal of enabling organizations to defend themselves against this attack vector. We will also study other kinds of digital social engineering attacks. Our work aims to develop an understanding of exploitable human interactions in computer systems and derive a new set of techniques to prevent dangerous interactions.

 

Projects Jointly Funded with the Center for Technology, Society & Policy

A User-Centered Perspective on Algorithmic Decision-Making

Leads: Emily Paul, G.S. Hans, Pavel Vanegas, Rena Coen

Algorithmic personalization drives much of the content we encounter online, from search results and movie recommendations to the ads we see and the prices we are offered. Much of this personalization saves us time and helps us find what we are looking for. However, personalization can also disadvantage individuals. Our research is aimed at improving understanding of how people think about online personalization and when people believe personalization becomes too targeted or discriminatory. By providing insight into how people feel about and understand algorithmic personalization, this research will contribute a user-centered perspective to guidelines that the Center for Democracy and Technology is developing for the fair and responsible use of algorithms.

 

The Value of Respect: Reclaiming the Philosophical and Political Foundations of Informed Consent for the Era of Big “Things”

Leads: Anna Lauren Hoffmann, Elaine Sedenberg

The proliferation of sensors, social networks, and massive data repositories presents an unprecedented opportunity to study human behavior. But this opportunity poses new challenges to the protection of individuals and groups by respecting privacy and autonomy, ensuring data security, and considering unforeseen consequences. Organizations have been left with research ethics frameworks and legacy consent processes that are poorly suited for modern data analyses and extended timescales. By marrying legal and policy analysis of informed consent with careful explication of respect itself, we plan to develop a penetrating discussion of 1) the ideal of respect for persons and 2) how informed consent has, at various points in its development, sought to operationalize this ideal in various contexts. Foregrounding the connection between respect and informed consent—and critically interrogating both—is, we argue, an integral step towards the development of a 21st century research ethic and actionable policy recommendations.

 

Operationalizing Privacy for Open Data Initiatives: A Guide for Cities

Leads: Nathan Malkin, Sona Makker

Open data is a powerful tool for supporting digital citizenship. Sharing government data with the public can enable transparency, encourage civic participation, and empower communities. However, the information can be sensitive or carry privacy implications. Complying with relevant laws and studying potential consequences can make the process costly for individual cities and even discourage them from releasing data. While this problem is common, there is, at present, no shared solution. The goal of our project is to create concrete, actionable guidelines cities can follow to provide open data while complying with laws and staying consistent with privacy expectations.