Announcement / April 2018

RSVP for 4/26 Seminar with Doug Tygar, “Adversarial Machine Learning”

Thursday, 4/26, 12-1pm

Adversarial Machine Learning

Please join us on Thursday, April 26 at 12pm for the third and final event in the CLTC Spring 2018 Seminar Series. This seminar will feature Doug Tygar, Professor of Computer Science and Professor of Information Management at UC Berkeley. A light lunch will be available for those who RSVP.

Location: South Hall, Room 205, UC Berkeley Campus.

RSVP here

Abstract

Machine learning would seem to be a powerful technology for Internet computer security. If machines can learn when a system is functioning normally and when it is under attack, then we can build mechanisms that automatically and rapidly respond to emerging attacks. Such a system might be able to automatically screen out a wide variety of spam, phishing, network intrusions, malware, and other nasty Internet behavior. But the actual deployment of machine learning in computer security has been less successful than we might hope. What accounts for the difference? To understand the issues, let’s look more closely at what happens when we use machine learning. In one popular model, supervised learning, we train a system on labeled data to produce a classifier. While standard machine learning algorithms are robust to input errors drawn from random distributions, it turns out they are vulnerable to errors strategically chosen by an adversary. In this talk, I will demonstrate a number of methods that adversaries can use to corrupt machine learning.
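The abstract’s contrast between random and strategic errors can be made concrete with a small sketch. The following Python example is my own illustration, not code from the talk: the synthetic dataset, the flip budget, and the use of scikit-learn’s LogisticRegression are all assumptions. It compares random label noise against label flips targeted by an adversary at the points the clean model classifies most confidently, a classic label-flip poisoning strategy.

```python
# Toy sketch of training-data poisoning via label flips (illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two well-separated Gaussian classes in 2-D.
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)),
               rng.normal(1.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

clean = LogisticRegression().fit(X, y)

def train_with_flips(n_flips, strategic):
    """Retrain after an attacker flips n_flips training labels."""
    y_poisoned = y.copy()
    if strategic:
        # Strategic adversary: flip the labels the clean model is most
        # confident about; each such flip exerts a strong pull on the
        # learned decision boundary.
        idx = np.argsort(np.abs(clean.decision_function(X)))[-n_flips:]
    else:
        # Random noise: flip uniformly chosen labels.
        idx = rng.choice(len(y), size=n_flips, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return LogisticRegression().fit(X, y_poisoned)

# Held-out data from the same distribution.
X_test = np.vstack([rng.normal(-1.0, 1.0, (1000, 2)),
                    rng.normal(1.0, 1.0, (1000, 2))])
y_test = np.array([0] * 1000 + [1] * 1000)

print(f"clean model accuracy:     {clean.score(X_test, y_test):.3f}")
for strategic in (False, True):
    model = train_with_flips(n_flips=80, strategic=strategic)
    label = "strategic flips" if strategic else "random flips   "
    print(f"{label} accuracy: {model.score(X_test, y_test):.3f}")
```

With the same budget of corrupted labels, the strategically placed flips typically degrade test accuracy noticeably more than random noise, which is exactly the gap between the random-error and adversarial-error settings the abstract describes.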

My colleagues and I at UC Berkeley — as well as other research teams around the world — have been looking at these problems and developing new machine learning algorithms that are robust against adversarial input. The search for adversarial machine learning algorithms is thrilling: it combines the best work in robust statistics, machine learning, and computer security. One significant tool security researchers have is the ability to examine attack scenarios from the adversary’s perspective (the black hat approach) and thereby expose the limits of computer security techniques. In the field of adversarial machine learning, this approach yields fundamental insights. Even though a growing number of adversarial machine learning algorithms are available, the black hat approach shows us that there are some theoretical limits to their effectiveness.
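The connection to robust statistics, and the theoretical limits the black hat approach reveals, both have a textbook illustration. The sketch below is again my own toy example, not material from the talk: the median resists a small fraction of adversarially planted points that would ruin the mean, but no estimator survives once the adversary controls half the data (its breakdown point), which is the flavor of limit referred to above.

```python
# Toy illustration of robustness and its limits (illustration only).
import numpy as np

rng = np.random.default_rng(0)

honest = rng.normal(0.0, 1.0, 950)      # 95% honest samples centered at 0
attack = np.full(50, 1000.0)            # 5% adversarially planted outliers
data = np.concatenate([honest, attack])

# The mean has breakdown point 0: a single bad point, placed far enough
# away, can move it arbitrarily. Here 5% contamination drags it ~50 away.
print(f"mean:   {data.mean():8.2f}")
# The median has breakdown point 1/2: it shrugs off this 5% contamination,
# but no estimator can resist once the adversary controls half the data.
print(f"median: {np.median(data):8.2f}")
```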

This talk discusses joint work with Anthony Joseph and other members of the SecML research group at UC Berkeley.

About the Speaker

Doug Tygar is Professor of Computer Science and Professor of Information Management at UC Berkeley. He works in the areas of computer security, privacy, and electronic commerce. His current research includes privacy, security issues in sensor webs, digital rights management, and usable computer security. His awards include a National Science Foundation Presidential Young Investigator Award, an Okawa Foundation Fellowship, a teaching award from Carnegie Mellon, and invited keynote addresses at PODC, PODS, VLDB, and many other conferences.

Tygar has written three books; his book Secure Broadcast Communication in Wired and Wireless Networks (with Adrian Perrig) is a standard reference and has been translated into Japanese. He designed cryptographic postage standards for the US Postal Service and has helped build a number of security and electronic commerce systems, including Strongbox, Dyad, Netbill, and Micro-Tesla. He served as chair of the Defense Department’s ISAT Study Group on Security with Privacy, and he was a founding board member of ACM’s Special Interest Group on Electronic Commerce.