Center for Long-Term Cybersecurity Announces 2018 Research Grantees

The UC Berkeley Center for Long-Term Cybersecurity (CLTC) is proud to announce the recipients of our 2018 research grants. In total, 37 groups of researchers will share over $1 million in funding to support a broad range of initiatives related to cybersecurity and other emerging issues at the intersection of technology and society.

The purpose of CLTC’s research funding is not only to address the most interesting and complex challenges of today’s socio-technical security environment, but also to grapple with the broader challenges of the next decade’s environment. The Center is focusing its research in four priority areas: machine learning and artificial intelligence, building the cyber-talent pipeline, improving cybersecurity governance, and protecting vulnerable online populations.

Thirteen of the projects are renewals of research previously funded by CLTC. Newly funded initiatives include improving the cybersecurity of local governments; protecting vulnerable individuals and organizations from state surveillance; defending against social engineering attacks; understanding the security implications of 5G networks; developing secure contracts through blockchain; and more.

“We’re extremely excited both to have reinvested in some of the most exciting projects from the first cohorts of CLTC grantees, and to continue to grow our community of CLTC researchers,” said Betsy Cooper, Executive Director of CLTC.

The 37 winning proposals were chosen through review by a cross-disciplinary committee of UC Berkeley faculty members and administrators. Two types of grants were awarded: seed grants, generally below $15,000, intended to fund exploratory studies; and discrete project grants of up to $100,000, given to projects with defined boundaries, clear outcomes, and impact potential. All principal investigators (PIs) have a UC Berkeley research affiliation, but many of the initiatives involve partners from outside institutions, including Bar-Ilan University, Carnegie Mellon University, the City and County of San Francisco, New York University, the Norwegian University of Science and Technology, The Policy Lab, the University of British Columbia, the University of Michigan, the University of Washington, and the United States Department of Agriculture.

“Cybersecurity as a problem set continues to expand from the growing technical challenges of digital systems, to the political, economic, and societal challenges that digital-human interactions present,” said Steven Weber, Faculty Director of CLTC and Professor in the UC Berkeley School of Information. “CLTC is privileged to be supporting a broad range of basic and applied research projects that will contribute to making the future digital environment a place where human beings actually want to live.”

 

Summary Descriptions of CLTC 2018 Research Grantees

Below are short summaries of new research projects that will be funded by the UC Berkeley Center for Long-Term Cybersecurity through 2018 research grants.

Advanced Encryption Technologies for the Internet of Things and Data Storage Systems

Sanjam Garg, Assistant Professor, UC Berkeley Department of Electrical Engineering and Computer Science (EECS); Daniel Masny, Postdoctoral Researcher, EECS

The main goal of this project is to realize advanced encryption technologies that provide specialized security guarantees, such as key-dependent message (KDM) or related-key attack (RKA) security. A growing number of applications demand stronger security from encryption schemes than what is guaranteed by classical encryption methods. This project will provide new encryption technologies that satisfy these more stringent security requirements. More specifically, the researchers will construct new encryption schemes that remain secure even when the adversary can gain access to some information about the secret key itself.
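
As a purely illustrative sketch (our example, not part of the grantees' work), the following Python snippet shows why key-dependent messages break naive schemes: a one-time-pad-style cipher that is perfectly secret for ordinary messages leaks everything when the message is the key itself. KDM-secure schemes are designed so that even such self-referential encryptions reveal nothing.

```python
import os

def xor_encrypt(key: bytes, message: bytes) -> bytes:
    """Toy one-time-pad-style encryption: ciphertext = message XOR key."""
    assert len(key) == len(message)
    return bytes(k ^ m for k, m in zip(key, message))

key = os.urandom(16)

# Encrypting an ordinary message reveals nothing without the key.
ordinary = xor_encrypt(key, b"meet me at noon!")

# Encrypting the key under itself (a key-dependent message) is catastrophic:
# every ciphertext byte is zero, so the adversary learns that m = key.
kdm_ciphertext = xor_encrypt(key, key)
print(kdm_ciphertext == bytes(16))  # True -- the scheme is not KDM-secure
```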

Building Decentralized Contract Systems with Strong Privacy

Alessandro Chiesa, Assistant Professor, EECS

The goal of this project is to design a system for smart contracts with strong privacy guarantees. Through a combination of protocol design and cryptographic engineering, the researchers aim to develop a system in which any participating user may broadcast a transaction that attests to the offline execution of a piece of code, without revealing any information except for the number of inputs and outputs involved. The goal is to make such a system practical for use in a wide variety of contract types, including crowdfunding, voting, and marketplaces.
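
As a rough structural sketch under our own assumptions (not a description of the grantees' design), a privacy-preserving transaction of this kind might expose only the number of inputs and outputs, carrying everything else as opaque commitments plus a proof blob; the zero-knowledge proof machinery itself is elided behind a stub here.

```python
from dataclasses import dataclass
from typing import List
import hashlib
import os


@dataclass
class PrivateTransaction:
    """Hypothetical transaction format: only counts and opaque commitments are public."""
    input_commitments: List[bytes]   # hiding commitments to consumed notes
    output_commitments: List[bytes]  # hiding commitments to created notes
    proof: bytes                     # stand-in for a zero-knowledge proof of correct execution

    @property
    def num_inputs(self) -> int:
        return len(self.input_commitments)

    @property
    def num_outputs(self) -> int:
        return len(self.output_commitments)


def commit(value: bytes, randomness: bytes) -> bytes:
    """Toy hash commitment (a real system would use a formally hiding, binding scheme)."""
    return hashlib.sha256(randomness + value).digest()


def verify(tx: PrivateTransaction) -> bool:
    """Placeholder verifier: a real node would check the zero-knowledge proof here."""
    return len(tx.proof) > 0  # stub only; no actual proof checking


note = commit(b"10 coins to address A", os.urandom(32))
tx = PrivateTransaction(input_commitments=[note], output_commitments=[note, note], proof=b"\x01")
print(tx.num_inputs, tx.num_outputs, verify(tx))  # 1 2 True -- nothing else about the transfer is public
```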

Cybersecurity Awareness for Vulnerable Populations

Ahmad Sultan, Master of Public Policy Candidate, Goldman School of Public Policy, UC Berkeley

The City of San Francisco’s Digital Inclusion Initiative focuses on addressing the technology needs of the most underserved neighborhoods and communities, including vulnerable populations, defined as seniors, people with disabilities, low-income residents, and limited-English-speaking populations. Interviews have revealed that these populations are the most likely to report falling victim to cyber-attacks such as malware and phishing. This project will use a mix of surveys to assess whether the nature of cyber-crime and cybersecurity creates unique problems specific to vulnerable populations, and whether the scope of cyber-crime is significantly larger for these populations than for the general population. The end goal is to provide the City with a set of recommendations that could reduce cyber-crime for the vulnerable.

Cybersecurity for Urban Infrastructure

Alison E. Post, Associate Professor, Political Science and Global Metropolitan Studies, UC Berkeley

The cybersecurity of urban infrastructure systems, including water and transit systems, is crucial, given their fundamental importance for everyday life. Yet there is little research on the cybersecurity of such systems, and what research exists tends not to incorporate social-scientific approaches to understanding human cognition and behavior, especially perceptions and behaviors related to risk and risk avoidance. This project addresses this gap by assembling a team of social scientists, civil engineers, and computer scientists to develop a research program focused on reducing the risk of security breaches in urban infrastructure systems. The project builds on interdisciplinary discussions of the topic through the Social Science Matrix and Global Metropolitan Studies-funded “Berkeley Infrastructure Initiative.”

Cybersecurity Toolkits of/for the Future: A Human-Centered and Design Research Approach

James Pierce, Adjunct Faculty, Jacobs Institute for Design Innovation, UC Berkeley; Sarah Fox, PhD Candidate, Tactile and Tactical (TAT) Design Lab, University of Washington; Richmond Wong, PhD Student, UC Berkeley School of Information; Nick Merrill, PhD Candidate, UC Berkeley School of Information

The cybersecurity toolkit—a collection of digital tools, tutorials, tips, best practices, and other recommendations—has emerged as a popular approach for preventing and addressing cybersecurity threats and attacks. Often these toolkits are oriented toward vulnerable populations who have unique and pressing cybersecurity needs but may not have access to the resources of large governments, corporations, or other organizations. Many such tools are designed specifically for journalists, researchers, political activists, and certain minority groups, such as refugees and LGBTQ youth. This project will study this category of cybersecurity tools to inform the design and development of cybersecurity toolkits for both near-term and far-term futures. The project will adopt a mixed-methods approach rooted in human-computer interaction (HCI) and design.

Deep Fairness in Classification

Matt Olfat, PhD Student, Industrial Engineering and Operations Research Department (IEOR), UC Berkeley; Anil Aswani, Assistant Professor, IEOR

Automated decision-making systems using machine learning have found much commercial success, but there is concern that the output of these systems perpetuates the effects of discrimination present in the training data. For instance, machine learning classifiers and rating scores are prone to codifying biases (inherent in the data) against protected classes defined by attributes such as race, gender, or political affiliation. Using anonymized financial and health data sets to test computational speed, predictive ability, and other properties, these researchers are developing algorithms that improve the fairness of classifiers.
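
One common fairness criterion against which such algorithms can be measured is demographic parity: a classifier's positive-prediction rate should not differ much across groups. A minimal sketch of such a check (our illustration, not the grantees' code):

```python
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups.

    predictions: array of 0/1 classifier outputs
    group: array of 0/1 protected-attribute labels (e.g., two demographic groups)
    """
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return abs(rate_a - rate_b)

# Example: a classifier that approves ~70% of group 0 but only ~40% of group 1
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
preds = np.where(group == 0, rng.random(1000) < 0.7, rng.random(1000) < 0.4).astype(int)
print(f"demographic parity gap: {demographic_parity_gap(preds, group):.2f}")  # roughly 0.30
```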

Enhancing Security Using Deep Learning Techniques

Dawn Song, Professor, EECS; Chang Liu, Postdoctoral Scholar, UC Berkeley

Recent advances in deep learning have led to state-of-the-art results on a diverse set of tasks, such as image classification, caption generation, natural language processing, and speech recognition. This research team will investigate the use of deep learning techniques to enhance the security of applications, including in IoT devices and crypto-currency smart contracts. In particular, they will investigate how deep learning techniques can help identify and locate vulnerabilities in programs and alert their users to apply patches; they will research how to apply deep learning techniques to prevent abusive usage of technologies, especially robocalls; and they will investigate approaches for mitigating the threats of adaptive attackers.

Human-Centric Research on Mobile Sensing and Co-Robotics: Developing Cybersecurity Awareness and Curricular Materials

Alice M. Agogino, Roscoe and Elizabeth Hughes Professor of Mechanical Engineering, Education Director of the Blum Center for Developing Economies; Euiyoung Kim, Postdoctoral Design Fellow, Jacobs Institute for Design Innovation, Department of Mechanical Engineering, UC Berkeley; Matilde Bisballe Jensen, PhD Candidate, Norwegian University of Science & Technology (NTNU).

This research initiative will focus on the privacy awareness of users of mobile sensing devices and co-robots in domestic settings. The researchers will focus specifically on older, less technically savvy users and on parents with children, two groups that are particularly vulnerable in the case of co-robotics in domestic settings. Drawing upon user approaches to phishing and malware in the online domain, the researchers aim to create relevant guidelines on cybersecurity behavior for private users, and to inform designers on how to more effectively build cybersecurity awareness and features into their product designs.

Malpractice, Malice, and Accountability in Machine Learning

Joshua Kroll, Postdoctoral Research Scholar, UC Berkeley School of Information; Nitin Kohli, PhD Student, UC Berkeley School of Information

As data-driven systems, data science, and machine learning become more prevalent and important, the imperative to govern such technologies grows, too. However, there is currently no accepted definition of best practice, malpractice, or negligence for such systems. This project will study malpractice in data analysis and work to identify the nature of malice in data science, separating it from malpractice and negligence; develop a taxonomy of failure modes for such systems; produce a continuously updated catalog of data-driven system failures; and explore the space of possible attacks and mitigations available to the designers of data-driven systems.

The Mice that Roar: Small States and the Pursuit of National Defense in Cyberspace

Melissa K. Griffith, PhD Candidate, Department of Political Science, UC Berkeley

This dissertation research will investigate how small states pursue national cyber-defense, both individually and collectively. Using cross-national case studies focused on Denmark, Estonia, Finland, and other Nordic and Baltic states, the project addresses a gap in existing political science research by examining small states’ national cyber-defense postures, focusing on two interrelated questions: why and how have small countries with comparatively limited resources become significant providers of national cyber-defense for their populations? And how have cyber threats shaped their defense postures, both individually (through the acquisition of domestic cyber-capabilities) and collectively (through military alliances and international institutions)?

Model Agnostic Estimation of Threat Probabilities

Venkatachalam Anantharam, Professor, EECS

Providing improved attack probability estimation is a tangible and useful way to shore up cyber-defenses for individual, technological, and societal systems. This research aims to develop a data-driven estimation framework for assessing the probabilities of diverse cybersecurity threats. The researchers are developing algorithms for estimating threat probabilities that are model agnostic, i.e., that do not presume the kinds of attacks are known in advance, so that they can estimate the probabilities of new, as yet unseen attacks.
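
One classical, model-agnostic way to assign probability to event types never seen in the data is the Good-Turing estimator, which uses the count of attack types observed exactly once to estimate the total probability mass of unseen types. The sketch below illustrates that general idea and is not the project's own framework.

```python
from collections import Counter

def good_turing_unseen_mass(observed_events: list[str]) -> float:
    """Good-Turing estimate of the probability that the next event is of a type
    never seen before: (number of types seen exactly once) / (total observations)."""
    counts = Counter(observed_events)
    singletons = sum(1 for c in counts.values() if c == 1)
    total = len(observed_events)
    return singletons / total if total else 1.0

# Hypothetical log of attack types observed on a network
log = ["phishing", "phishing", "ddos", "sql_injection", "ddos",
       "phishing", "credential_stuffing", "ddos"]
print(good_turing_unseen_mass(log))  # 0.25 -> estimated chance the next attack is a new type
```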

Post and Re-trauma: Enhancing the Cybersecurity of Sexual Assault Victims on Facebook

Hadar Dancig-Rosenberg, Associate Professor, Bar-Ilan University Faculty of Law, and Visiting Professor, Berkeley Institute for Jewish Law and Israel Studies; Anat Peleg, Lecturer, Faculty of Law, and Director of the Center for the Study of Law and Media, Bar-Ilan University; Roy Rosenberg, Senior Partner and Director, Economic Regulation Department, Ascola Economic and Financial Consulting LTD

In the last few years, social media has created supportive spaces where sexual assault victims can share their testimonies, yet it also serves as a forum for victimizing activities such as stalking, harassing, and humiliating survivors. This study aims to uncover and empirically characterize the negative aspects of Facebook use by sexual assault survivors in the U.S. and Israel who have shared their stories online. Through an anonymous online survey and complementary interviews with select subjects, the researchers seek to identify relationships among various parameters, such as the severity and type of the online abusive behavior, the circumstances in which it appears, and the characteristics of the victim, the abuser, and their relationship.

Privacy Analysis at Scale: A Study of COPPA Compliance

Serge Egelman, Director, Usable Security & Privacy Group, International Computer Science Institute (ICSI); Irwin Reyes, Researcher, ICSI; Primal Wijesekera, PhD Candidate, Department of Electrical and Computer Engineering, University of British Columbia; Amit Elazari, Doctoral Law Candidate, UC Berkeley School of Law, CTSP Fellow, UC Berkeley School of Information.

This research team has launched a successful platform for detecting violations of the Children’s Online Privacy Protection Act (COPPA) at scale by automatically observing the behaviors of tens of thousands of free Android apps. They also have developed a website for use by parents, who can review the privacy behaviors of apps to make more informed choices, as well as regulators, who can take actions against bad actors. With this grant, they will develop a testing API to help developers prevent privacy violations in their apps prior to release, and they will expand their tools for use in virtualized environments and introduce crowdsourced user input, which will allow them to execute and inspect many mobile apps simultaneously.
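
To give a feel for what a pre-release check might look like, here is a hypothetical sketch; the names and behavior below are our assumptions for illustration, not the team's actual testing API. A developer's test harness flags an app build that transmits a persistent identifier without verifiable parental consent.

```python
from dataclasses import dataclass

@dataclass
class ObservedTransmission:
    """One network transmission observed while exercising an app build in a test harness (hypothetical)."""
    destination: str
    data_type: str          # e.g., "android_id", "location", "email"
    consent_obtained: bool  # whether verifiable parental consent was recorded first

PERSONAL_DATA_TYPES = {"android_id", "location", "email", "imei"}

def coppa_flags(transmissions: list[ObservedTransmission]) -> list[str]:
    """Return human-readable warnings for transmissions that may raise COPPA concerns."""
    warnings = []
    for t in transmissions:
        if t.data_type in PERSONAL_DATA_TYPES and not t.consent_obtained:
            warnings.append(
                f"{t.data_type} sent to {t.destination} without verifiable parental consent"
            )
    return warnings

# Example run against a hypothetical children's app build
observed = [
    ObservedTransmission("ads.example-tracker.com", "android_id", consent_obtained=False),
    ObservedTransmission("api.example-game.com", "score", consent_obtained=False),
]
print(coppa_flags(observed))
```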

Privacy Localism Conference Travel Grant Proposal

Ahmad Sultan, Master of Public Policy Candidate, Goldman School of Public Policy, UC Berkeley

As the pendulum of interest in privacy policy swings from Washington, D.C. to local governments, city officials are taking up the responsibility of exploring the regulatory space for improved privacy protections. For his capstone project, the grantee will conduct an advanced policy analysis focused on improving cybersecurity policy and cyber-hygiene in local government. To inform this research, he will attend a conference, “Privacy Localism: A New Research Agenda,” hosted by the Information Law Institute at New York University. Through a series of panel discussions and a roundtable meeting, participants will discuss the future of local government privacy policy. The project will assist the Committee on Information Technology (COIT), the premier technology policy body for the City and County of San Francisco.

Probing the Ambivalence of Facial Recognition Technologies in China: An Ethnographic Study of Megvii

Michael Kowen, PhD Student, Department of Sociology, UC Berkeley

This researcher will undertake a residency and conduct an ethnography of Megvii, a startup company in Beijing that develops smart surveillance technologies, particularly facial recognition systems, using artificial intelligence techniques. Through interviews and ethnographic observations at this particular firm, the project will generate insights into how Chinese experts who currently develop these kinds of technologies perceive the value and significance of digital technologies and especially how their concerns about the security and privacy of users influence the development of their products.

Repercussions of Cyber-Security Measures in U.S. High Schools

Anne Jonas, PhD Student, UC Berkeley School of Information

Public high schools in the United States have increasingly digitized administrative and instructional systems over the last twenty years. Consequently, the security and welfare of students have been a primary concern, as reflected in regulations like the federal Children’s Internet Protection Act (CIPA). Yet some cybersecurity measures intended for children’s protection may cause them harm, for reasons regulators and designers may not have anticipated. This research will examine potential tensions around cybersecurity measures in schools, their assumptions about who counts as a stakeholder, and the ensuing, if often unintentional, effects on privacy, discipline, equity, and learning, in order to establish policies and procedures that do not further discriminate against vulnerable populations, inflict harm, or substantially diminish learning and social opportunities.

Responding to Emerging Protection Threats in Cyberspace

Alexa Koenig, Executive Director, Human Rights Center; Joseph Guay, Associate, The Policy Lab; Lisa Rudnick, Principal and Founding Partner, The Policy Lab; Leeor Levy, Principal, The Policy Lab

The growing use of cyber capabilities against civilian populations as a means and method of warfare is presenting new vulnerabilities and heightened risk profiles for refugees, internally displaced populations (IDPs), and other civilian and civil society groups. However, humanitarian and human rights practitioners are hindered in their ability to keep pace with such threats because they lack the information, skills, and tools needed to mitigate them. With support from this grant, the Human Rights Center (HRC) at UC Berkeley is partnering with The Policy Lab to launch an innovative, applied Cyber Program at the intersection of cyber- and human security, in an effort to improve protection outcomes for civilian populations and other vulnerable groups.

Ride Free or Die: Overcoming Collective Action Problems in Autonomous Driving Governance

Deirdre K. Mulligan, Associate Professor, School of Information, UC Berkeley, Faculty Director, Berkeley Center for Law & Technology; Adam Hill, Government Information Specialist, USDA FSIS

Autonomous vehicles promise to save up to 30,000 lives per year and prevent 94% of all road accidents, according to the U.S. National Highway Traffic Safety Administration. Yet, paradoxically, research suggests that regulations mandating algorithms that optimize for saving lives threaten to create a collective action problem that could delay the adoption of autonomous vehicles. This project seeks to identify precise behavioral interventions that could resolve the collective action problem. Through experimental and qualitative studies, it will build a toolbox of behavioral interventions to reduce consumer resistance to regulation and to restyle regulations in ways that induce pro-social behavior. Pulling these insights together, the project will prototype a blockchain-based tool designed to reduce free riding and optimize the life-saving potential of autonomous vehicles.

rIoT: Quantifying IoT Costs and Harms

Kimberly Fong, MIMS student, UC Berkeley School of Information; Kurt Hepler, MIMS student, UC Berkeley School of Information; Rohit Raghavan, MIMS student, UC Berkeley School of Information; Peter Rowland, MIMS student, UC Berkeley School of Information

As the proliferation of consumer Internet of Things (IoT) devices continues, so too do security problems that impact users, companies, and the Internet as a whole. But who is responsible when attacks from IoT-based threats like the Mirai botnet cripple the Internet? This project examines the costs that insecure IoT devices impose on average consumers and considers how these costs should factor into how liability is assigned for damages caused by IoT botnet attacks. By quantifying the real and perceived costs and risks of insecure devices, the researchers hope to help consumers make more informed decisions about the devices they purchase and to encourage manufacturers to design more secure devices.

The Role of Private Ordering in Cybersecurity: Towards A Cybersecurity License

Amit Elazari, Doctoral Law Candidate, UC Berkeley School of Law; Research Fellow, CTSP, UC Berkeley School of Information

The law governing the information economy is often not prescribed by legislators or courts, but rather by private entities using technology and standard-form contracts. This research seeks to account for the often unobserved role of private ordering in the future of cybersecurity, and suggests that, much as contracts played a revolutionary role in the copyright industry, form contracts could serve as an important form of regulation, driving positive change in the evolving cybersecurity landscape.

Secure Internet of Things for Senior Users

Alisa Frik, Postdoctoral Fellow, ICSI; Serge Egelman, ICSI; Florian Schaub, Assistant Professor, University of Michigan School of Information; Joyce Lee, Master’s Degree Candidate, UC Berkeley

Older adults are increasingly engaged with emerging technologies, especially in the domain of healthcare. Examples include wearable devices for medical measurements, context-aware safety monitoring, fall sensors, and therapeutic robots. However, due to potentially limited technological literacy and a higher likelihood of physical or mental impairments, older adults are particularly vulnerable to the cybersecurity risks posed by these devices. This research aims to better understand the privacy and security attitudes of older adults with respect to emerging healthcare technologies, and to design an effective system that will empower informed decisions, better control over personal data, and improved security for these users.

Security Implications of 5G Networks

Jon Metzler, Lecturer, Haas School of Business, Associated Faculty, Center for Japanese Studies, UC Berkeley

5G wireless service promises to enable higher-bandwidth, lower-latency wireless services, not just for consumers but also for enterprise and industrial assets. If 5G networks are used to support societal infrastructure such as cities, vehicles, and industrial assets, they will create new dependencies, and the costs of outages or security incidents could be far higher than for today’s consumer cellular networks. The researchers will undertake a survey of the security risks posed by 5G network deployments, as well as potential practices for risk mitigation. Because the research precedes commercial 5G deployments, it has potential value to network operators, regulators, and consumers.

Statistical Foundations to Advance Provably Private Algorithms

Paul Laskowski, Adjunct Assistant Professor, UC Berkeley School of Information

In the past year, several major companies have deployed data processing systems based on differential privacy. These systems use mathematical techniques to limit the information that can be learned about individuals, but a number of limitations threaten the success of this approach. This research will bring statistical techniques to bear on private algorithms, expanding the range of scenarios for which meaningful privacy guarantees can be supported, and providing a theoretical basis to understand related privacy standards. These techniques will be applied in a prototype deployment of a provably private database.
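
For context, the core idea behind differentially private systems is to add calibrated noise to query results so that any one individual's presence changes the output distribution only slightly. A minimal sketch of the standard Laplace mechanism (our illustration, not the project's code):

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise scaled to sensitivity/epsilon,
    satisfying epsilon-differential privacy for queries with the given sensitivity."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: a counting query ("how many patients have condition X?") has sensitivity 1,
# since adding or removing one person changes the count by at most 1.
true_count = 1234
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(round(noisy_count))
```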

Uncovering the Risk Networks of Third-Party Data Sharing in China’s Social Credit System   

Shazeda Ahmed, PhD Student, UC Berkeley School of Information

China’s “social credit system” is a personal and behavioral data-driven effort to publicly rate the trustworthiness of individuals, drawing upon data from people’s social networks, online purchases, and video game consumption. Major cybersecurity risks are looming under the surface of social credit: as third-party services partner with Chinese tech firms to provide benefits to consumers related to their credit scores, large amounts of personal data are moving across a largely invisible network of companies. This project proposes an interview-driven case study of third-party data sharing agreements, with a qualitative investigation focused on Sesame Credit, the product that dominates the social credit market and has over 450 million users.

 

Projects Jointly Funded with the Center for Technology, Society & Policy

Everyone Can Code? Race, Gender, and the American Learn to Code Discourse

Kate Miltner, PhD Candidate, USC Annenberg School for Communication and Journalism, Visiting Student Researcher, UC Berkeley Center for Science, Technology, Medicine, & Society

Corporations, politicians, and educators alike have positioned computer programming as essential for individual job success, and teaching “underrepresented minorities” to code is also frequently offered as a solution for the often-problematic gender and racial politics of Silicon Valley corporations. This project examines the power relations of the learn-to-code trend, particularly in terms of race and gender politics. By studying this phenomenon in both theory and practice—and placing it in relevant historical context—this project interrogates the popular belief that mass technological skills training will necessarily result in increased equity within Silicon Valley corporations.

Menstrual Biosensing Survival Guide

Noura Howell, PhD Student, UC Berkeley School of Information; Sarah Fox, PhD Candidate, University of Washington, Visiting Scholar, EECS; Richmond Wong, PhD Student, UC Berkeley School of Information

Biosensing technologies are increasingly present, predicting bodily or emotional health and offering promises of improved efficiency or personal wellness. Menstrual tracking apps, for example, encourage users to report intimate details, from the duration of periods and the texture of cervical mucus to emotional state and sexual behavior. These apps offer benefits, but they also pose risks in the case of a security breach or as practices of sharing health data become more prominent in the workplace. This project will conduct a review of existing menstrual biosensing technologies, their data policies, and users’ existing data practices to map this rapidly shifting field. The project has the potential to help users protect the privacy of their intimate data and to rethink assumptions about how these apps configure their users.

 

Renewed Projects

The following projects were funded in previous years and have been awarded renewal funding for 2018. Learn more about these initiatives in our 2017 grantee announcement.

Addressing the Privacy Gaps in Healthcare

Ruzena Bajcsy, Professor, EECS; Daniel Aranki, PhD Candidate, EECS

Adversarially Robust Machine Learning

Sadia Afroz, Research Scientist, ICSI

Allegro: A Framework for Practical Differential Privacy of SQL Queries

Dawn Song, Professor, EECS; Joseph Near, Postdoctoral Researcher, EECS

Defense against Social Engineering Attacks

David Wagner, Professor, EECS; Vern Paxson, Professor, EECS, and Director, Networking and Security Group, ICSI

Exploring Internet Balkanization through the Lens of Regional Discrimination

Jenna Burrell, Associate Professor, UC Berkeley School of Information; Anne Jonas, PhD Student, UC Berkeley School of Information

Identifying Audio-Video Manipulation by Detecting Temporal Anomalies

Alexei Efros, Associate Professor, EECS; Andrew Owens, Postdoctoral Scholar, EECS

Illuminating and Defending Against Targeted Government Surveillance of Activists

Vern Paxson, Professor, EECS, and Director, Networking and Security Group, International Computer Science Institute; Bill Marczak, Postdoctoral Researcher, UC Berkeley

The International Coordination of Cybersecurity Industrial Policies

Vinod Aggarwal, Senior Faculty Fellow and Professor, UC Berkeley Department of Political Science; Andrew Reddie, PhD Candidate, UC Berkeley Department of Political Science

NilDB: Computing on Encrypted Databases with No Information Leakage    

Alessandro Chiesa, Assistant Professor, EECS; Raluca Ada Popa, Assistant Professor, EECS

Secure Machine Learning

David Wagner, Professor, EECS; Michael McCoyd, PhD Student, EECS; Nicholas Carlini, PhD Student, EECS

The Security Behavior Observatory

Serge Egelman, Director, Usable Security & Privacy Group, ICSI; Alessandro Acquisti, Professor of Information Technology and Public Policy, Heinz College, Carnegie Mellon University (CMU); Lorrie Faith Cranor, Professor of Computer Science and of Engineering and Public Policy, CMU; Nicolas Christin, Assistant Research Professor in Electrical and Computer Engineering, CMU; Rahul Telang, Professor of Information Systems and Management, Heinz College, CMU

Stakeholder Workshop on Deterring Financially Motivated Cybercrime

Chris Hoofnagle, Adjunct Full Professor, School of Information and School of Law, UC Berkeley; Aniket Kesari, JD/PhD Student, UC Berkeley School of Law; Damon McCoy, Assistant Professor, New York University

User Authentication Using Custom-Fit Ear EEG

John Chuang, Professor, UC Berkeley School of Information