January 25, 2022

Robust Object Classification via Part-Based Models

Robustness has become one of the most desired properties in machine learning (ML) models due to their increasing adoption in safety- and security-sensitive settings. Most attempts to train models robust to adversarial manipulation rely on expensive robust optimization and large amounts of data. As a result, they are difficult to scale and…

January 27, 2021

Towards Bayesian Classifiers that are Robust Against Adversarial Attacks

We aim to build neural networks that are intrinsically robust against adversarial attacks. We focus on classifying images in real-world scenarios with complex backgrounds under unforeseen adversarial attacks. Previous defenses lack interpretability and have limited robustness against unforeseen attacks, failing to deliver trustworthiness to users. We will study Bayesian models,…

January 14, 2020

Novel Metrics for Robust Machine Learning

Although deep neural networks (DNNs) have achieved impressive performance in many applications, they also exhibit well-known sensitivities and security concerns that can emerge for a variety of reasons, including adversarial attacks, backdoor attacks, and a lack of fairness in classification. Hence, it is important to better understand these risks in…

January 14, 2020

Adversarially Robust Machine Learning

Machine learning provides valuable methodologies for detecting and protecting against security attacks at scale. However, a machine-learning algorithm used for security differs from those in other domains because, in a security setting, an adversary will try to adapt their behavior to avoid detection. This research team will explore methodologies for improving…

January 27, 2021

Robust Machine Learning via Random Transformation

Current machine learning models suffer from evasion attacks such as adversarial examples. This introduces security and safety concerns that lack any clear solution. Recently, the use of random transformations has emerged as a promising defense against such attacks. Here, we hope to extend this general idea to build a defense…

January 14, 2020

Robust Access in Hostile Networks

Our research is about providing safe access to the Internet in places where network access is restricted or censored. Many people are limited in what they can say and do online because of restrictive filters that block websites. These filters also put people at risk of surveillance and infection by…

January 25, 2021

Center for Long-Term Cybersecurity 2021 Research Grantees

The UC Berkeley Center for Long-Term Cybersecurity (CLTC) is proud to announce the recipients of our 2021 research grants. In total, 14 different student-led research groups have been awarded grants to support initiatives related to digital security issues emerging at the intersection of technology and society. Three of the projects…

January 25, 2022

Center for Long-Term Cybersecurity 2022 Research Grantees

The UC Berkeley Center for Long-Term Cybersecurity (CLTC) is proud to announce the recipients of our 2022 research grants. In total, 11 different student-led research groups have been awarded grants to support initiatives related to digital security issues emerging at the intersection of technology and society. The 2022 grants will…

January 23, 2019

Center for Long-Term Cybersecurity Announces 2019 Research Grantees

The UC Berkeley Center for Long-Term Cybersecurity (CLTC) is proud to announce the recipients of our 2019 research grants. In total, 30 different groups of researchers will share a total of roughly $1.3 million in funding to support a broad range of initiatives related to cybersecurity and digital security issues…

February 3, 2016

CLTC Announces $900,000 in Inaugural Research Grants

How can organizations better detect spear-phishing cyberattacks? How could neural signals be used as a method of online authentication? How effective are tactics such as financial account closures and asset seizures in deterring cyber criminals? What types of defenses could help protect at-risk activists and NGOs from state-level surveillance? These…