CLTC has launched a new series of “explainer videos” to break down complex cybersecurity-related topics for a lay audience. The first of these videos focuses on “adversarial machine learning,” in which AI systems are deceived (by attackers, or “adversaries”) into making incorrect assessments. An adversarial attack might entail presenting a machine-learning model with inaccurate or misrepresentative data while it is training (a “poisoning” attack), or introducing maliciously designed inputs to deceive an already-trained model into making errors (an “evasion” attack).
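To make the second kind of attack concrete, the sketch below shows the classic Fast Gradient Sign Method (FGSM), one well-known way to craft an evasion attack: it nudges an input in whatever direction most increases the model’s loss, producing an image that looks nearly unchanged to a human but can flip the model’s prediction. This is a minimal illustration, not the method covered in CLTC’s overview; the tiny classifier, input shapes, and epsilon value are hypothetical stand-ins.

```python
# Minimal FGSM sketch (evasion attack on an already-trained model).
# The toy model, input shapes, and epsilon below are illustrative
# assumptions, not details from the CLTC overview.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """Return a slightly perturbed copy of x that raises the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in [0, 1]

# Illustrative usage with a hypothetical stand-in classifier:
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # a fake 28x28 grayscale "image"
y = torch.tensor([3])          # its true label
x_adv = fgsm_attack(model, x, y, epsilon=0.1)
# x_adv is visually almost identical to x, yet may be misclassified.
```

The key point the example surfaces is that the perturbation is tiny and targeted: nothing about the input looks wrong to a person, which is why these attacks are hard to detect.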
“Machine learning has great power and promise to make our lives better in a lot of ways, but it introduces a new risk that wasn’t previously present, and we don’t have a handle on that,” says David Wagner, Professor of Computer Science at the University of California, Berkeley.
CLTC has also written a brief overview of adversarial machine learning for policymakers, business leaders, and other stakeholders who may be involved in the development of machine learning systems but may not be aware of the potential for these systems to be manipulated or corrupted. The overview includes a list of additional resources.