Grant / January 2020

Secure Machine Learning for Adversarial Environments

We plan to build a pipeline that leverages novel robust secure machine learning techniques to detect and defeat cybersecurity threats against computer systems. Modern cyber-threats constitute a game-theoretic arms race in which sophisticated, well-funded attackers evolve to evade detection and detection mechanisms react in turn. We plan to analyze the game-theoretic dynamics of a robust secure machine learning system, specifically one in which adversaries attempt to evade or mislead the learning system, and to develop novel techniques for making the system robust against such attacks. We have received large datasets for this study from several prominent commercial providers.
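To make the evasion threat concrete, the following is a minimal illustrative sketch (not the proposal's method): a gradient-based evasion attack against a simple linear detector, the smallest instance of the attacker-vs-detector dynamic described above. All names, weights, and parameters here are hypothetical.

```python
import numpy as np

def predict(w: np.ndarray, b: float, x: np.ndarray) -> int:
    """Linear detector: returns +1 (malicious) or -1 (benign)."""
    return 1 if w @ x + b > 0 else -1

def evade(w: np.ndarray, x: np.ndarray, eps: float) -> np.ndarray:
    """FGSM-style evasion step.

    For a linear score w.x + b the gradient with respect to the input
    is simply w, so the attacker moves eps against sign(w) to lower the
    score while bounding each feature change by eps.
    """
    return x - eps * np.sign(w)

# Hypothetical detector and sample.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.2, -0.1])    # flagged as malicious by the detector

print(predict(w, b, x))               # → 1  (detected)
x_adv = evade(w, x, eps=0.5)
print(predict(w, b, x_adv))           # → -1 (evades detection)
```

A robust defense must anticipate such perturbations, for example by training the detector on adversarially perturbed samples, which is one direction the proposed work would evaluate.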