News / May 2018

Media Roundup: CLTC in the News, Grantee Updates

Media Coverage: CLTC in the News

New Report: ‘China’s Long Game in Techno-Nationalism’

CLTC Faculty Director Steven Weber and UC Berkeley School of Information Ph.D. student and 2018 CLTC grantee Shazeda Ahmed have released a new report. In “China’s Long Game in Techno-Nationalism,” the authors discuss how China’s national cybersecurity law, which took effect in June 2017, has been interpreted as an unprecedented impediment to the operation of foreign firms in the country. Although the law’s scope is indeed broader than that of any previous regulation, the process through which it was drafted (and eventually approved) bears similarities to three earlier cases from the past two decades of Chinese information technology policy-making. Comparing these cases, Weber and Ahmed argue that economic concerns have consistently overshadowed claims of national security considerations in laws directed at foreign enterprises. Read the report.

CLTC Grantee Amit Elazari Bar On Pens Piece on Legal Bug Bounties for VICE’s Motherboard

In a recent article for VICE’s Motherboard, CLTC grantee Amit Elazari Bar On named the regulation of algorithms as one of the most pressing policy concerns in today’s digital society. While regulators, scholars, think tanks, and policymakers devise better laws to hold the people behind bad algorithms accountable, those who are actually best positioned to uncover how bad algorithms operate are risking legal liability. “Currently, US anti-hacking laws actually prevent algorithm auditors from discovering what’s inside the black-box and report their findings,” Elazari wrote. “What we need is not just better laws—we need a market that will facilitate a scalable, crowd-based system of auditing to uncover ‘bias’ and ‘deceptive’ bugs that will attract and galvanize a new class of white-hat hackers: algorithmic auditors. They are the immune system for the age of algorithmic decision-making.” Read the story here.

Elazari was also cited in a recent CyberScoop article, “Dropbox Revamps Vulnerability Disclosure Policy, with Hopes That Other Companies Follow Suit,” which reported on Dropbox’s recent updates to its vulnerability disclosure policy. The company’s move to cultivate relationships with cybersecurity researchers is a response to “decades of abuse, threats, and bullying” against researchers who find and describe bugs in commercial software. Elazari successfully encouraged the file-sharing company to further expand its policy by including a clause pledging not to bring a copyright infringement claim against good-faith participants in the bug bounty program. Read more.

Steve Weber in Pacific Standard, National Journal

CLTC Faculty Director Steve Weber weighed in on the recent diplomatic summit between North and South Korea—and its implications for the U.S.—in an interview for Pacific Standard. When asked whether the summit was merely pageantry or a sign of more substantive change, Weber responded that “there may be no agreement that’s possible to reach, but the symbolism says to the people in the capitals, ‘we’re really going to try.’ And that’s not something that anybody predicted.” Read the interview.

In a separate article for the National Journal, Weber discussed the growing balkanization of the tech world and its potential to shape how the tech and cybersecurity sectors develop in the U.S. and abroad. The article explains how this trend is largely driven by national security concerns and economic protectionism, and spurred by fears of diminished privacy. “The tech world imagined itself as being American firms playing on a global stage,” said Weber. Read the article.

CLTC and Hewlett Foundation Recognized in Financial Times

A recent article in the Financial Times, “Silicon Valley’s tech billionaires raising funds to fight cyber crime,” lauded the efforts of the Hewlett Foundation, the founding funder of CLTC, to increase philanthropic support for cybersecurity. The piece explains that the purpose of the Hewlett Foundation’s funding of CLTC—along with centers at Stanford and M.I.T.—was to create “multidisciplinary cyber security centres” that would “bring together policy and tech people…. ‘The policy people didn’t know the tech people and vice-versa—they all hate each other,’ says [Hewlett President] Larry Kramer. ‘It is actually a wild west. And the way the west was tamed was the development of institutions. We have to start behaving; we can’t come into the bar drunk and shoot it up.’” As an example of the need for philanthropic support for cybersecurity research, the piece highlights CLTC’s 2017 report (and accompanying event) on the cybersecurity of the Olympic Games, and features quotations from CLTC Executive Director Betsy Cooper. Read the article.

CLTC Grantees in the Washington Post’s “Monkey Cage”

A pair of researchers from American University recently collaborated with CLTC to publish a white paper exploring how cyber operations are likely to be used in a crisis or conflict scenario. To accompany the release of the report, the authors—Benjamin Jensen and David Banks—wrote an analysis that appeared in the Washington Post’s Monkey Cage blog. In their piece, “Cyber Warfare May Be Less Dangerous Than We Think,” Jensen and Banks argue that “although states like Russia will continue to engage in cyberattacks against the foundations of democracy (a serious threat indeed), states are less likely to engage in destructive ‘doomsday’ attacks against each other in cyberspace. Using a series of war games and survey experiments, we found that cyber operations may in fact produce a moderating influence on international crises.” Read the Monkey Cage piece here.

Recent Grantee Updates

Team of CLTC Grantees Releases ‘Project rIoT’ Report

As Internet of Things (IoT) devices become more ubiquitous, so does the problem of malicious actors using these inexpensive, vulnerable products to snoop on consumers, cause devices to malfunction, or degrade or deny access to services. Attacks on the availability of information technology—such as DDoS attacks—create unique harms that accrue both to the targets of the attacks and to the consumers whose IoT devices enable them. But what happens to the owners of the devices? What costs do they bear as a result of their devices being hacked? Do their electricity and bandwidth bills increase when their devices are used in attacks? And what can consumers and regulators do to ensure that manufacturers improve device security to prevent these attacks in the future? In a new report, “rIoT: Quantifying Consumer Costs of Insecure Internet of Things Devices,” a team of CLTC grantees—including Kim Fong, Kurt Hepler, Rohit Raghavan, and Peter Rowland—aims to answer these questions and identify the costs to consumers in the context of DDoS attacks. The researchers explore potential implications of these issues and discuss regulations that could be used to promote a more secure IoT ecosystem in the future. Read the report.
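To make the kinds of costs the report examines concrete, the back-of-the-envelope sketch below estimates how much extra bandwidth and electricity a single compromised device might add to a consumer’s bills during one attack. Every figure and name in the snippet is an assumed placeholder for illustration, not the report’s methodology or its findings.

# Illustrative back-of-the-envelope estimate (not the rIoT report's methodology):
# the extra bandwidth and electricity a single compromised IoT device might add
# to its owner's bills while participating in one DDoS attack.
# All values below are assumed placeholders, not findings from the report.

ATTACK_HOURS = 24.0     # assumed time the device spends flooding traffic
UPLOAD_MBPS = 1.5       # assumed sustained upstream rate during the attack
EXTRA_WATTS = 2.0       # assumed additional power draw under attacker control
PRICE_PER_GB = 0.10     # assumed marginal bandwidth/overage price per gigabyte
PRICE_PER_KWH = 0.20    # assumed residential electricity rate per kilowatt-hour

def estimated_consumer_cost() -> float:
    """Rough dollar cost of the bandwidth and electricity one hijacked device
    could consume over the course of a single attack."""
    gigabytes = UPLOAD_MBPS * 3600 * ATTACK_HOURS / 8 / 1000   # megabits/s -> gigabytes
    kilowatt_hours = EXTRA_WATTS * ATTACK_HOURS / 1000         # watts -> kWh
    return gigabytes * PRICE_PER_GB + kilowatt_hours * PRICE_PER_KWH

print(f"Illustrative per-attack cost to the device owner: ${estimated_consumer_cost():.2f}")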

CLTC Grantee on Cybersecurity Awareness for Vulnerable Populations at SF Tech Council

2018 CLTC grantee Ahmad Sultan recently presented preliminary results and recommendations from his research project, “Cyber Security Awareness for the Underserved Population of San Francisco,” to the San Francisco Tech Council at NeighborNest. For the project, Sultan administered in-depth surveys of 160 city residents to explore the nature of cyber crime and cybersecurity skills, particularly among vulnerable populations in San Francisco. In this presentation, Sultan focused on the senior citizens in his survey sample. Among other findings, his results shed light on the impact that cyber crime has on online trust, online confidence, and service use. His preliminary analysis also offers a look at the resources that city residents use to protect themselves online—and the impact that resource type has on cybersecurity skill outcomes. Sultan looks forward to presenting more of his research findings during Digital Inclusion Week in May.

New Paper on Extracting Secrets From Deep Learning Models

CLTC grantees Nicholas Carlini, Chang Liu, and Dawn Song, along with Jernej Kos and Úlfar Erlingsson, have released a paper revealing how vulnerable deep learning models are to information leakage. In “The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets,” the researchers present a simple-to-compute metric that can be applied to any deep learning model to measure its memorization of secrets. Using this metric, they detail how to extract those secrets efficiently through black-box API access. They go on to show that unintended memorization occurs early, is not due to over-fitting, and is a persistent issue across different types of models, hyperparameters, and training strategies. In a recent article for The Register, co-author and EECS professor Dawn Song said, “We hope to raise awareness that it’s important to consider protecting users’ sensitive data as machine learning models are trained. Machine learning or deep learning models could be remembering sensitive data if no special care is taken.” Read the full report.
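As a rough illustration of the kind of measurement the paper describes, the sketch below computes an exposure-style score for an artificial “canary” secret by ranking it against every candidate secret according to a model’s log-perplexity. The scoring callable log_perplexity is a hypothetical stand-in for whatever query interface a model under test exposes; this is a simplified approximation of the idea, not the authors’ implementation.

import math
from typing import Callable, Iterable

def exposure(canary: str,
             candidates: Iterable[str],
             log_perplexity: Callable[[str], float]) -> float:
    """Rank an inserted canary against every candidate secret by the model's
    log-perplexity; the more 'likely' the model finds the canary relative to
    the alternatives, the higher the exposure (in bits) and the stronger the
    evidence of unintended memorization."""
    scores = {c: log_perplexity(c) for c in candidates}
    ranked = sorted(scores, key=scores.get)    # lowest log-perplexity (most likely) first
    rank = ranked.index(canary) + 1            # 1-indexed rank of the canary
    return math.log2(len(scores)) - math.log2(rank)

# Hypothetical usage: a canary of the form "my PIN is ####" with 10,000 possible fillers.
# candidates = [f"my PIN is {i:04d}" for i in range(10_000)]
# print(exposure("my PIN is 1234", candidates, log_perplexity=my_model_score))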

Won’t Somebody Think of the Children?

A team of CLTC-affiliated researchers had their study, “‘Won’t Somebody Think of the Children?’ Examining COPPA Compliance at Scale,” published in the journal Proceedings on Privacy Enhancing Technologies. For the paper, Irwin Reyes, Primal Wijesekera, Joel Reardon, Amit Elazari Bar On, Abbas Razaghpanah, Narseo Vallina-Rodriguez, and Serge Egelman analyzed nearly 6,000 child-directed Android apps and found that well over half of the apps potentially violated the US Children’s Online Privacy Protection Act (COPPA). Read the paper here.

Blockchains and Cryptocurrencies

On April 20, Nicholas Weaver, staff researcher with the International Computer Science Institute (ICSI) and former CLTC grantee, delivered a special lecture entitled “Blockchains and Cryptocurrencies: Burn It With Fire.” In his presentation, Weaver gave his take on the current ecology of cryptocurrencies and blockchains, and offered suggestions for targeted responses that governments and other actors can implement to disrupt this space on a global basis. Watch the presentation.