Event Recap / November 2020

CLTC Research Exchange on Protecting Vulnerable Populations


On November 12, the Center for Long-Term Cybersecurity presented the second event in our 2020 Research Exchange Series, a showcase of the work of CLTC-affiliated researchers who are pushing the boundaries of technology, social science, and the humanities to positively influence how individuals, organizations, and governments think about and deal with cybersecurity issues.

The theme of this half-day conference, “Protecting and Securing a More Inclusive Society Online,” centered on how technology can be designed and deployed to make cybersecurity more accessible to underserved populations who may be particularly vulnerable to digital security threats, and who may have less technical training or resources.

The online event combined 15-minute presentations from UC Berkeley researchers who received funding through the CLTC Grants Program in 2019 with five-minute “lightning talks,” in which 2020 grantees previewed their in-progress work.

“We’re about trying to look over the horizon to see what kinds of things we ought to be doing now so we don’t get caught by surprise,” said Steven Weber, CLTC Faculty Director, in his opening remarks, “and ultimately amplify the upside of the digital revolution by making digital systems more secure — and fundamentally more trustworthy.”

Detecting Phone Phishing

In their presentation, “Sounds Phishy: Protecting Consumers Against Phone Phishing,” Michelle Chen and Ashish Sur — alumni from the Master of Information Management and Systems program at the School of Information — provided an overview of their research on real-time detection of “phone phishing,” in which scammers use social engineering to fraudulently access personal and financial information.

The researchers conducted surveys and interviews to understand how phone scams work, including how scammers use “spoof” phone numbers to trick victims into accepting calls, then convey authority and urgency to get victims to comply. “It was really interesting to hear that people seem to be overconfident in their ability to overcome the phone phishing scam before they actually experience it for themselves,” Chen explained.

Noting that there is currently no system available that alerts users to a possible scam during a call, the researchers used natural language processing (NLP) to develop a phone scam detection application that analyzes conversations in real time, then warns users if fraud is detected. “We believe that our work uncovered better understanding for addressing all the challenges in fighting phone scams against consumers,” Sur explained.

Perspectives on mHealth Privacy

In her five-minute lightning talk, Laura Gomez-Pathak, a PhD student in the UC Berkeley School of Social Welfare, presented initial findings from her research, “Low-income Patients’ Perspectives on Health Data Privacy and Security.” Gomez-Pathak is using survey-based research to understand vulnerable patients’ perspectives on privacy and security in the use of mobile health (mHealth) applications. 

“Research on the privacy and security knowledge, attitudes, and apprehensions of users from low-income and ethnic minority backgrounds is especially limited, which motivated me to conduct this exploratory research study,” Gomez-Pathak explained. 

Gomez-Pathak is conducting semi-structured interviews with patients who are part of a large safety-net healthcare system in the San Francisco Bay Area. Her preliminary findings indicate that the more tech-savvy users are, the less concerned they tend to be about the privacy of mHealth apps. “There seems to be a correlation between tech literacy levels and the amount of privacy concerns patients have,” Gomez-Pathak said. “Another theme that has emerged is around location data tracking. Some people think it can actually lead to profiling.”

Enabling Online Anonymity

In his Research Exchange presentation, Venkatachalam Anantharam, Professor of Electrical Engineering in the Department of Electrical Engineering and Computer Sciences (EECS) at UC Berkeley, presented “Enabling Online Anonymity for Vulnerable Individuals and Organizations.”

Anantharam explained that commonly used tools for remaining anonymous online, such as Tor, do not entirely preserve privacy, as an observer could potentially identify when packets of data are sent and received. His work focuses on advancing a peer-to-peer keying alternative to public key-based mixing, and on finding ways to make this kind of observation impossible while minimizing latency.

“In today’s world, there are many societies where people are not as free as they are in the United States, or in the Western world in general,” Anantharam said. “Privacy is better preserved by enabling your communications to be anonymous.”

Privacy Controls for Always-On Devices

Nathan Malkin, a PhD student in the computer science department, presented a lightning talk on “Privacy Controls for Always-Listening Devices,” focused on AI-based devices like Alexa and Google Home that are designed to listen and respond to human speech. “We think that in the future, we’ll be saying ‘Hey, Alexa’ a lot less, but the devices will be doing a lot more listening,” Malkin explained. “The natural progression is for these devices to start listening all the time.” 

Malkin’s work is focused on developing a permission system that would allow users to more explicitly designate what these devices can (or cannot) hear. Following an initial survey to understand users’ perceptions of privacy around the devices, Malkin developed a “new transparency mechanism, a way for users to understand what an always-listening device would hear in various circumstances. Then we tested whether regular people could use this tool to detect if a malicious service was spying on them.”

“We’re running studies to investigate the setup experience, as well as trying to design prototypes that we can give to people in order to better understand what it’s like to be living with one of these devices,” Malkin explained. “We’re really excited about this work and are looking forward to sharing more when we have results.”

Privacy and Security Settings in Smartphones

Presenting on a related topic, Alisa Frik, a postdoctoral researcher at the UC Berkeley International Computer Science Institute (ICSI), introduced her research on the usability of privacy and security settings in smartphones. 

“People with limited technological literacy and experience may find it hard to make informed decisions about those settings,” Frik explained. “The levels of literacy and experience are unevenly distributed across various populations, and so it may disproportionately affect certain vulnerable populations, such as older adults, lower-income people, and people of color. The goals of this project are to understand the comprehension gaps, and whether people understand the language and jargon that’s been developed by technologists.”

Frik’s research revealed that many users of mobile devices have little awareness of how they can change their privacy and security settings. “Only 20% of iOS users were aware of the existing settings that allow them to erase data from the phone after 10 failed passcode attempts,” Frik said. “Less than half of iOS and Android users were aware of their ability to opt in and out of storing audio recordings of interactions with voice assistants, Siri or Google Assistant respectively. They were also not aware of how to opt out of personalization of passwords that were used in their default mobile browsers.”

“The main takeaway from this work is that, while participants were worried about online risks, they were not aware of many privacy and security settings that would help to address those risks,” Frik said. “Many held incorrect beliefs about the default settings, and many respondents haven’t configured — but would like to configure — these settings.”

Expanding Diverse Talent in the Cybersecurity Workforce

The November CLTC Research Exchange concluded with a conversation on “Expanding Diverse Talent in the Cybersecurity Workforce,” featuring Sandra Wheatley Smerdon, SVP, Marketing, Threat Intelligence and Influencer Communications at Fortinet, an enterprise security firm, and Lisa Parcella, VP Product Management & Marketing for Security Innovation and a member of the International Consortium of Minority Cybersecurity Professionals (ICMCP).

“Both [panelists] have not only had distinguished careers in cybersecurity, but they’re also advocates for representation of women, veterans, and people of color in cybersecurity,” said CLTC Executive Director Ann Cleaveland, who moderated the conversation.

Parcella said that diversity remains an issue in cybersecurity, but she has observed a “proliferation of cybersecurity communities that are promoting inclusion, initiating conversations, and creating access to content and programs to fill the general cybersecurity gap, but specific to different ethnically and racially diverse communities.”

“We’re seeing progress,” agreed Smerdon. “It may not be at the rate we’d like to see, but it’s definitely happening…. When I go to cybersecurity conferences, I see a lot more women than in the beginning. We do know that there are more female Millennials coming into the industry. And the good news is that they tend to move more rapidly into higher-level jobs. But the number of Blacks and Hispanics in cybersecurity continues to be extremely low.

“There are a lot of organizations realizing that public and private partnerships have to come together on this issue,” Smerdon said. “People are hungry for more education and curriculum training around this industry, and I think working together, we can really make inroads.”

Parcella noted the importance of hiring from within to help more diverse professionals enter the cybersecurity field. She stressed the importance of “identifying folks who have an interest in the work, giving them a channel to explore that interest, and potentially pivot their existing career paths. That can be done through training programs, encouraging diversity, and mentorship.”

Watch the full video of the Research Exchange above or on YouTube.

The third and final event of the 2020 CLTC Research Exchange series, “Long-Term Security Implications of AI/ML systems,” will take place on December 10. Register here to attend.