The UC Berkeley Center for Long-Term Cybersecurity (CLTC) is proud to announce the recipients of our 2021 research grants. In total, 14 different student-led research groups have been awarded grants to support initiatives related to digital security issues emerging at the intersection of technology and society. Three of the projects were jointly funded with the UC Berkeley Center for Technology, Society & Policy, a multi-disciplinary research center focused on emergent social and policy issues of technology.
We are also pleased to announce that we have awarded two UC Berkeley graduate students the 2021 Cal Cybersecurity Research Fellowship: Tanu Kaushik, a student in the School of Information’s Master of Information and Cybersecurity (MICS) program, and Ji Su Yoo, a UC Berkeley PhD student in the School of Information. The fellowship will support Kaushik’s research on threats posed by adversarial machine learning, along with detection, protection, response, and recovery mechanisms for known attack techniques, and Yoo’s work on misinformation corrections, specifically how the message and the identity of the messenger affect how various social groups receive those corrections. The Cal Cybersecurity Research Fellowship is made possible by a generous gift from an anonymous donor.
As the events of 2020 have shown us, the effort to improve digital security in the public interest is only increasing in importance. Many of the funded projects are already yielding important results, including research on privacy controls for always-listening devices and on organizations’ vulnerability management and remediation processes. New initiatives to be funded include studies on the cyber talent pipeline, the usability of privacy and security controls on smartphones, advancing machine learning defenses against adversarial attacks, and more.
“CLTC is delighted to be able to support UC Berkeley researchers working at the forefront of cybersecurity for the sixth year in a row,” says Ann Cleaveland, Executive Director of CLTC. “These students have continued to advance groundbreaking work in the face of the extraordinary challenges of the past year. The research being done by our grantees is crucial for informing changes in the world of cybersecurity behaviors, technologies, policies, markets, and beyond. Congratulations to our 2021 grantees.”
Learn more about our 2016, 2017, 2018, 2019, and 2020 grantees. You can also search all our past and present grants on our Grants page.
CLTC 2021 Research Grantees
Below are titles and lists of primary researchers for projects that will be funded through the UC Berkeley Center for Long-Term Cybersecurity’s 2021 research grants.
A Comprehensive Investigation of Developers’ Remediation Practices
Noura Alomar, PhD Student, EECS, UC Berkeley; Primal Wijesekera, Staff Research Scientist, EECS, UC Berkeley
Security vulnerabilities pose a grave danger to the integrity of any system because they can undermine almost any protection mechanism an organization puts in place to defend against potential attacks. Finding vulnerabilities, both before software is deployed and after it is in production, is therefore a critical task in the software development lifecycle. However, without robust remediation processes tailored to addressing identified vulnerabilities, organizations remain exposed to attacks exploiting the very flaws their detection activities have already uncovered. In a previous CLTC-funded project, “Hackers vs. Testers: Understanding Software Vulnerability Discovery Processes,” which focused on obtaining an improved understanding of organizations’ vulnerability management processes, one of our key findings was that organizations struggle with vulnerability remediation. We plan to continue this line of work by conducting a qualitative study of the vulnerability remediation processes organizations follow. We also want to understand the remediation processes that follow notifications of privacy-related issues. We believe remediating privacy issues is as important as remediating security issues, and must be addressed to make the app ecosystem a safe place. There is a rich literature on finding privacy violations and on understanding how users perceive privacy, especially in the mobile ecosystem; the literature on helping developers make their code compliant with privacy regulations, however, is sparse. Given the rising emphasis on privacy regulation and compliance, we believe it is imperative to understand how developers react to and remediate privacy violations in order to comply with new privacy regulations.
Are Password Managers Improving our Password Habits?
David Ng, Graduate Student, School of Information, UC Berkeley; Cristian Bravo-Lillo, Lecturer, School of Information, UC Berkeley; Jacky Ho, Graduate Student, School of Information, UC Berkeley; Christian Hercules, Graduate Student, School of Information, UC Berkeley; Stuart Schechter, Lecturer, School of Information, UC Berkeley
Adoption of password managers is becoming the norm, but are they also encouraging best practices among their users? Do users generate complex, unique passwords with their password managers, or do they simply store the weak passwords they already had? We have found that many users ignore password reset notifications. We aim to find ways to incentivize users to reset their passwords and to use their password managers as intended.
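The team’s methodology is not detailed in this abstract, but as a rough illustration of the question, the sketch below uses the open-source zxcvbn strength estimator to flag vault entries that look like reused weak passwords rather than generated ones. The vault contents and the score cutoff are hypothetical, chosen only for illustration.

```python
# Illustrative sketch only (not the project's methodology): flag stored
# passwords whose zxcvbn score (0 = weakest, 4 = strongest) suggests a
# weak, human-chosen password rather than a generated one.
from zxcvbn import zxcvbn  # pip install zxcvbn

def find_weak_entries(vault):
    """Return (site, score) pairs for entries at or below a weak-score cutoff."""
    weak = []
    for site, password in vault.items():
        score = zxcvbn(password)["score"]
        if score <= 2:  # hypothetical cutoff for "weak"
            weak.append((site, score))
    return weak

# Hypothetical vault contents for illustration.
vault = {
    "email": "hunter2",
    "bank": "Summer2020!",
    "forum": "vQ9#mL2p&4rTzx",
}
print(find_weak_entries(vault))  # flags "email" and likely "bank"
```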
Assessing and Developing Online Election Information Infrastructure
Emma Lurie, PhD Student, School of Information, UC Berkeley
In the United States, people are increasingly turning to online sources to find information about elections. Election information includes everything from mail-in ballot instructions to candidate Facebook page posts. In the US, as well as around the world, online misinformation threatens democratic systems. Politicians, technology companies, journalists, and voters all understand the importance of high-quality online information to fair and trustworthy democratic processes. This project defines a strong online election information infrastructure as one that is robust to malevolent actors and enables constituents to easily identify important information. This project acknowledges that the current online election infrastructure is intricately related to technology platforms (e.g. social media sites and search engines).
While there are ongoing efforts to understand problems in parts of this infrastructure, such as misinformation on Facebook or Google’s alleged partisan bias, these piecemeal approaches are missing an understanding of the broader ecosystem. Complementing research that shifts the focus from individual pieces of disinformation to the disinformation ecosystem, this project aims to shift the subject of research from a particular technology platform to the broader online election information infrastructure, and will look to apply cybersecurity governance strategies, especially abusability testing, to online election information infrastructure.
Evaluating Equity and Bias in Cybersecurity-related Job Descriptions and the Impact on the Cyber Talent Pipeline
Mehtab Khan, JSD Candidate, School of Law, UC Berkeley
Cybersecurity workers are in high demand but short supply. During the COVID-19 crisis, we have seen a greater need for cybersecurity professionals as e-commerce has skyrocketed, universities have shifted online, and millions of Americans are working from home on personal networks. There are also significant diversity challenges to the cybersecurity talent pool since women represent only 11-24% of the total workforce. Every day, we read about another company’s data being breached. These attacks outpace defense mechanisms, and one reason for this is the lack of a competent cybersecurity workforce. The cybersecurity workforce shortfall remains a critical vulnerability for companies and nations. Conventional education and policies cannot meet the demand, and we need new solutions for how to create awareness and identify, develop, and train talent.
Our project is an exploration of the role of job descriptions and hiring policies in signaling the relevant skills for a diverse and competent cybersecurity workforce. Using an experimental natural language processing technology, we will compare traditional cyber job descriptions that use mandatory degree requirements against an adapted job description (removing mandatory degree requirements and creating a skills-based fingerprint). We will use a mixed-methods approach to collect qualitative and quantitative data from a diverse set of undergraduate and graduate students at UC Berkeley to evaluate (a) the perceived relevance of each job posting and (b) the likelihood of being a successful candidate, from both the student and employer perspectives. We will analyze the findings with a focus on the interplay of job descriptions with automation, hiring practices, and anti-discrimination laws.
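The abstract does not name the experimental NLP technology the team will use. As a hedged sketch of one generic approach, the snippet below compares a candidate’s skill profile against a degree-centric posting and a skills-based posting using TF-IDF vectors and cosine similarity from scikit-learn; all of the text and the technique itself are illustrative assumptions, not the project’s method.

```python
# A generic sketch of comparing job-description variants with off-the-shelf
# NLP; the project's actual "experimental NLP technology" is not specified.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

traditional = ("Bachelor's degree in computer science required. "
               "Five years of security operations experience. CISSP preferred.")
skills_based = ("Able to triage alerts in a SIEM, write detection rules, "
                "and script in Python. No degree requirement.")
candidate = ("Self-taught analyst who built Python tooling for log analysis "
             "and wrote detection rules for a home-lab SIEM.")

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform([traditional, skills_based, candidate])

# How closely does the candidate's profile match each posting?
print(cosine_similarity(X[2], X[0]))  # vs. degree-centric posting
print(cosine_similarity(X[2], X[1]))  # vs. skills-based posting
```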
Evaluating the Digital Divide in the Usability of Privacy and Security Settings in Smartphones
Joanne Ma, Graduate Student, School of Information, UC Berkeley; Alisa Frik, Postdoctoral Researcher, EECS, UC Berkeley
With the smartphone penetration rate in the US exceeding 80%, smartphone settings remain one of the primary mechanisms for controlling information privacy and security. Yet the usability of these settings is largely understudied, especially with respect to their impact on underrepresented socio-economic and low-tech groups. In this project, we will estimate the gap in comprehension of and familiarity with privacy and security settings; analyze users’ ability to configure those settings; identify common usability issues; and evaluate the privacy and security threat models against which the settings are supposed to protect users, and their respective effectiveness. We will compare findings across socio-economic groups to draw conclusions about which groups are especially vulnerable to the identified issues.
Investigating the Compliance of Android App Developers with the California Consumer Privacy Act (CCPA)
Nikita Samarin, PhD Student, EECS, UC Berkeley; Primal Wijesekera, Staff Research Scientist, EECS, UC Berkeley; Jordan Fischer, Professor of Law and Lecturer, Drexel University School of Law and UC Berkeley School of Information
The United States lacks a comprehensive federal privacy regulation, relying instead on industry-specific and state-specific privacy laws. At the state level, the California Consumer Privacy Act (CCPA), which came into effect on January 1, 2020, and became enforceable on July 1, 2020, was enacted to provide enhanced privacy protections and rights for California residents. Our proposed project aims to investigate the extent to which Android app developers comply with the provisions of the CCPA that require them to provide consumers with accurate privacy notices and to respond to consumers’ “requests to know” by disclosing the personal information they have collected, used, or shared about those consumers for a business or commercial purpose. In doing so, we aim to answer two fundamental questions regarding the efficacy of the CCPA in enhancing privacy protections for California residents with respect to personal information collected by mobile app developers. First, is the information developers provide in response to “right to know” requests complete and accurate, and does the response accurately explain how this data has been collected, used, and shared? Second, are consumers able to successfully request, obtain, and interpret the information app developers provide in response to a “right to know” request? The results of this work will be of particular interest to policymakers and regulators at both the state and federal levels, as well as outside the US, who are currently enacting or considering similar privacy regulations in their jurisdictions.
Misinformation Corrections
Award Winner: 2021 Cal Cybersecurity Research Fellowship
Ji Su Yoo, PhD Student, School of Information, UC Berkeley
Misinformation and disinformation campaigns often rely on bots and fake accounts to establish credibility by impersonating human users with demographic characteristics, political beliefs, and social values similar to their audience’s. Such nefarious efforts succeed because people’s beliefs and behaviors around new information depend on the identity of the messenger and the conduit of transmission, not just the contents of the message. The uncorrected and unmitigated spread of misinformation is a widespread security concern. When we interpret online information, we lack credible, standardized avenues for verifying the identity and authenticity of sources.
In this project, we investigate the social psychological aspects of social framings that can lead to affective polarization about misinformation on social media. Prior research on misinformation has focused on the role of experts or anonymous identities on social media. However, the efficacy of exactly who is delivering the corrections is relatively understudied. We focus on two social psychological concepts, social identity and group status characteristics, to decipher how different social framings are associated with informational interpretations and subsequent corrections. Our strategic experiments concretely inform who should deliver misinformation corrections. Importantly, our approach helps us understand how audiences that traditionally harbor anti-institutional and anti-establishment sentiments will receive misinformation corrections depending on the message and the identity of the messenger. Our project directly aligns with and contributes to the CLTC mission to advance our understanding of socio-technical security environments, especially as it relates to the intersection of health information and civic engagement.
Privacy Controls for Always-Listening Devices
Nathan Malkin, PhD Student, EECS, UC Berkeley
Intelligent voice assistants and the microphone-equipped Internet of Things devices that support them are highly convenient, but they carry significant privacy risks. Newer and future devices extend these risks by listening all the time, beyond a few specific keywords. The goal of our research is to develop privacy controls for such devices by allowing users to specify restrictions (what should the assistant be able to hear, and what is off-limits?) and by building a system able to enforce those preferences. So far, we have investigated people’s expectations for these devices, developed potential privacy-preserving approaches, and prototyped transparency mechanisms. In the next phase of the project, we propose an in situ study of privacy controls for passive listening devices, evaluating their effectiveness and usability across several dimensions and criteria.
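As a toy illustration of user-specified restrictions (the project’s actual enforcement mechanisms are the subject of the research, and every name below is hypothetical), a passive-listening pipeline might gate processing of transcribed speech on a user-defined topic policy:

```python
# Hypothetical sketch: gate an always-listening assistant's processing of a
# transcript on user-specified off-limits topics. Real enforcement is far
# harder than keyword matching; this only illustrates the control surface.
OFF_LIMITS = {"medical", "finances"}  # topics the user has declared off-limits

TOPIC_KEYWORDS = {
    "medical": {"doctor", "prescription", "diagnosis"},
    "finances": {"bank", "salary", "account"},
}

def allowed_to_process(transcript: str) -> bool:
    """Return False if the utterance appears to touch an off-limits topic."""
    words = set(transcript.lower().split())
    return not any(words & TOPIC_KEYWORDS[topic] for topic in OFF_LIMITS)

print(allowed_to_process("remind me to call the bank tomorrow"))  # False
print(allowed_to_process("set a timer for ten minutes"))          # True
```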
Reverse Engineer and Counter Adversarial Attacks with Unsupervised Representation Learning
Xudong Wang, PhD Student, EECS, UC Berkeley; Nils Worzyk, Postdoctoral Researcher, EECS, UC Berkeley
Computer vision has been integrated into many areas of our lives, including facial recognition, augmented reality, autonomous driving, and healthcare. Making these technologies more accurate and generalizable to real-world data is no longer sufficient on its own; we must also safeguard their robustness against malicious attacks in cyberspace. Whereas supervised learning aims to learn a function that, given samples of data and their semantic labels, best approximates the relationship between input and output, unsupervised learning infers the natural structure present within a set of data points without any manual labeling. Unsupervised learning has therefore been widely adopted to handle ever-growing volumes of unlabeled data. However, while unsupervised training yields representations that generalize better than supervised training, unsupervised-trained models that are optimized for a specific task, such as image classification, are actually more vulnerable to adversarial attacks.
We propose to build on recent work applying adversarial training to unsupervised learning and to advance its adversarial robustness. In addition, we aim to detect adversarial inputs in the unsupervised-trained feature space and to reverse engineer the initially applied perturbation; the recovered perturbations can then be used to identify individual attacks or clusters of attacks, potentially tied to different groups of attackers. However, simply making the model more robust sacrifices its performance on downstream tasks, and, as in supervised learning, new attacks can fool even a strengthened model. It is therefore also necessary to find ways to restore the original, correct class of detected adversarial inputs. Ultimately, we aim to develop a novel unsupervised learning method that obtains powerful, generalizable representations without any semantic labels, so as to better protect privacy and resist malicious cyberattacks while making full use of large amounts of unlabeled data.
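A schematic sketch of the reverse-engineering step described above, under stated assumptions: the perturbation of a detected adversarial input is approximated as the difference between the input and a “purified” copy, and the recovered perturbations are then clustered to group attacks. The median-filter purifier, the random data, and the cluster count are all stand-ins for whatever the project ultimately develops.

```python
# Schematic only: estimate perturbations of detected adversarial inputs,
# then cluster them; clusters may correspond to distinct attack types.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans

def estimate_perturbation(x_adv: np.ndarray) -> np.ndarray:
    """Approximate the perturbation as (adversarial input - purified input)."""
    return x_adv - median_filter(x_adv, size=3)  # stand-in purifier

# Hypothetical batch of detected adversarial images, shape (N, H, W).
rng = np.random.default_rng(0)
x_advs = rng.random((100, 32, 32))

perturbations = np.stack([estimate_perturbation(x) for x in x_advs])
flat = perturbations.reshape(len(perturbations), -1)

labels = KMeans(n_clusters=3, n_init=10).fit_predict(flat)
print(np.bincount(labels))  # cluster sizes, i.e., candidate attack groups
```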
Robust Machine Learning via Random Transformation
Chawin Sitawarin, PhD Student, EECS, UC Berkeley
Current machine learning models suffer from evasion attacks such as adversarial examples, which introduce security and safety concerns that still lack a clear solution. Recently, the use of random transformations has emerged as a promising defense against such attacks. We hope to extend this general idea to build a defense that is secure, difficult to break even for strong adversaries, and efficient to deploy in practice. Additionally, insights gained from this work will broadly benefit the scientific communities that study stochastic neural networks and robustness properties.
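The abstract does not specify which transformations the project will study. As a minimal sketch of the general idea, in the spirit of earlier randomized-input defenses, the snippet below applies a random resize-and-pad to each input and averages the model’s softmax outputs over several draws; the draw count and size range are arbitrary.

```python
# Minimal sketch of a random-transformation defense (not the project's
# specific design): average predictions over randomly resized-and-padded
# copies of the input so no single fixed pipeline can be attacked.
import torch
import torch.nn.functional as F

def randomized_predict(model, x, n_draws=10, min_size=24, out_size=32):
    """Average softmax outputs of `model` over random resize-and-pad draws."""
    probs = 0.0
    for _ in range(n_draws):
        size = int(torch.randint(min_size, out_size + 1, (1,)))
        resized = F.interpolate(x, size=(size, size), mode="bilinear",
                                align_corners=False)
        pad = out_size - size
        left = int(torch.randint(0, pad + 1, (1,)))
        top = int(torch.randint(0, pad + 1, (1,)))
        # F.pad order for 4-D input: (left, right, top, bottom)
        padded = F.pad(resized, (left, pad - left, top, pad - top))
        probs = probs + F.softmax(model(padded), dim=1)
    return probs / n_draws
```

Because every forward pass sees a fresh transformation, an attacker must succeed in expectation over the randomness rather than against one fixed pipeline, which is precisely the kind of strong adversary the project aims to withstand.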
Towards Bayesian Classifiers that are Robust Against Adversarial Attacks
An Ju, PhD Student, EECS, UC Berkeley
We aim to build neural networks that are intrinsically robust against adversarial attacks, focusing on classifying images in real-world scenarios with complex backgrounds under unforeseen adversarial attacks. Previous defenses lack interpretability and offer limited robustness against unforeseen attacks, and so fail to earn users’ trust. We will study Bayesian models, which are more interpretable and intrinsically robust, exploring two directions: extending an existing Bayesian classifier with better models, and building new Bayesian models from discriminative models.
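As a generic illustration of why Bayesian classifiers are attractive here (this is textbook Gaussian naive Bayes, not the models the project proposes), predictions come from explicit per-class likelihoods that can be inspected directly, which is the interpretability property the abstract points to:

```python
# Textbook Gaussian naive Bayes, shown only to illustrate the Bayesian
# classification pattern: predict via log p(x|y) + log p(y).
import numpy as np

class GaussianBayesClassifier:
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.log_prior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def log_posterior(self, X):
        # log p(y|x) = log p(x|y) + log p(y) + const, features independent
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return ll + self.log_prior

    def predict(self, X):
        return self.classes[np.argmax(self.log_posterior(X), axis=1)]

# Two well-separated synthetic classes for demonstration.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = GaussianBayesClassifier().fit(X, y)
print(clf.predict(np.array([[0.1, -0.2], [2.9, 3.1]])))  # expected: [0 1]
```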
Projects Jointly Funded with the Center for Technology, Society & Policy
Activism Always: A Student Initiative for Data in the Social Impact Sector
Mikayla O’Reggio, Undergraduate Student, College of Natural Resources, UC Berkeley; Chelsie Lui, Undergraduate Student, California Polytechnic State University, San Luis Obispo; Hoa Nguyen, Master’s Student, San Francisco State University; Jin Pu, Master’s Student, Columbia University
Examining the Landscape of Digital Security and Privacy Assistance for Racial Minority Groups
Nikita Samarin, PhD Student, EECS, UC Berkeley; Moses Namara, PhD Student, Clemson University; Joanne Ma, Master’s Student, School of Information, UC Berkeley; Aparna (Abby) Krishnan, Undergraduate Student, University of Texas at Austin
Leveraging the Communicative, Social, and Health Benefits of Drumming in Early Childhood
Jeremy Gordon, PhD Student, School of Information, UC Berkeley; Jon Gillick, PhD Student, School of Information, UC Berkeley; Pierre-Valery Tchetgen, Research Specialist, 21CSLA State Center, UC Berkeley