January 27, 2021

Towards Bayesian Classifiers that are Robust Against Adversarial Attacks

We aim to build neural networks that are intrinsically robust against adversarial attacks. We focus on classifying images in real-world scenarios with complex backgrounds under unforeseen adversarial attacks. Previous defenses lack interpretability and offer limited robustness against unforeseen attacks, and thus fail to earn users' trust. We will study Bayesian models,…

January 14, 2020

Novel Metrics for Robust Machine Learning

Although deep neural networks (DNNs) have achieved impressive performance in many applications, they also exhibit well-known sensitivities and security concerns that can arise for a variety of reasons, including adversarial attacks, backdoor attacks, and lack of fairness in classification. Hence, it is important to better understand these risks in…

January 14, 2020

Adversarially Robust Machine Learning

Machine learning provides valuable methodologies for detecting and protecting against security attacks at scale. However, a machine-learning algorithm used for security differs from those in other domains because, in a security setting, an adversary will adapt their behavior to avoid detection. This research team will explore methodologies for improving…

January 27, 2021

Robust Machine Learning via Random Transformation

Current machine learning models suffer from evasion attacks such as adversarial examples. This introduces security and safety concerns that lack any clear solution. Recently, the use of random transformations has emerged as a promising defense against such attacks. Here, we hope to extend this general idea to build a defense…

January 14, 2020

Robust Access in Hostile Networks

Our research is about providing safe access to the Internet in places where network access is restricted or censored. Many people are limited in what they can say and do online because of restrictive filters that block websites. These filters also put people at risk of surveillance and infection by…

January 25, 2021

Center for Long-Term Cybersecurity 2021 Research Grantees

The UC Berkeley Center for Long-Term Cybersecurity (CLTC) is proud to announce the recipients of our 2021 research grants. In total, 14 different student-led research groups have been awarded grants to support initiatives related to digital security issues emerging at the intersection of technology and society. Three of the projects…

January 23, 2019

Center for Long-Term Cybersecurity Announces 2019 Research Grantees

The UC Berkeley Center for Long-Term Cybersecurity (CLTC) is proud to announce the recipients of our 2019 research grants. In total, 30 different groups of researchers will share a total of roughly $1.3 million in funding to support a broad range of initiatives related to cybersecurity and digital security issues…

January 14, 2020

Center for Long-Term Cybersecurity 2020 Research Grantees

The UC Berkeley Center for Long-Term Cybersecurity (CLTC) is proud to announce the recipients of our 2020 research grants. In total, 22 different groups of researchers will share nearly $1 million in funding to support a broad range of initiatives addressing cybersecurity and digital security issues at the intersection of…

February 3, 2016

CLTC Announces $900,000 in Inaugural Research Grants

How can organizations better detect spear-phishing cyberattacks? How could neural signals be used as a method of online authentication? How effective are tactics such as financial account closures and asset seizures in deterring cyber criminals? What types of defenses could help protect at-risk activists and NGOs from state-level surveillance? These…

February 8, 2017

Center for Long-Term Cybersecurity Announces 2017 Research Grantees

The Center for Long-Term Cybersecurity (CLTC) is pleased to announce the recipients of our 2017 research grants. In total, 28 different groups of researchers will share a total of over $1 million in funding. The projects span a wide range of topics related to cybersecurity, including new methods for making…