News / June 2024

UC Berkeley Team Funded to Enter DARPA Competition

AIxCC competition logo

A team of UC Berkeley researchers will receive $5000 in funding to enter the Artificial Intelligence Cyber Challenge (AIxCC), a competition sponsored by the Defense Advanced Research Projects Agency (DARPA), a research and development agency of the U.S. Department of Defense. The award is made possible thanks to an anonymous donor.

AIxCC is a “two-year competition that brings together the best and brightest in AI and cybersecurity to safeguard the software critical to all Americans,” according to the event’s website. Competitors are asked to “design novel AI systems to secure this critical code,” with a total of $29.5 million in prizes awarded to the teams with the best systems. AIxCC will consist of two competitions: the AIxCC Semifinal Competition (ASC), to be held in August 2024, and the AIxCC Final Competition (AFC), to be held in August 2025.

The UC Berkeley team’s project, “A CodeLM Automated Repair Program with Analysis, Planning, and Control,” proposes an automated, AI-based code repair solution that combines vulnerability detection and patch generation. “Our approach will deploy a sophisticated machine learning framework that integrates static and dynamic code analysis to increase the accuracy and reliability of vulnerability detection,” the researchers explained in their proposal. The abstract for the project is below.

The team includes Samuel Berston, a student in the UC Berkeley School of Information’s Master of Information and Cybersecurity (MICS) program; Marlon Fu, a student in the Master of Information and Data Science (MIDS) program; Marsalis Gibson, a PhD student in the UC Berkeley Department of Electrical Engineering and Computer Science (EECS); and MICS students Katelynn Hernandez, Gerald Musumba, Narayanan Potti, Ansuv Sikka, and Lawrence Wagner.

Abstract

Current implementations of vulnerability detection and automated code repair have been beneficial to corporations and governments that develop applications that may be susceptible to vulnerabilities. Even though learning-based solutions have exceeded the current state-of-the-art automated repair methods, these systems still suffer from low fault detection accuracy, the “overfitting problem,” and computational inefficiency. To address these problems, we propose to build an automated repair program that generates repairs in two steps: the first step, Vulnerability Detection, identifies the vulnerability’s location; the second step, Patch Generation, finds a patch that adequately fixes the vulnerability while maintaining the code’s original functionality, style, and readability. Specifically, vulnerabilities will be detected using a model that processes static and dynamic information for context, while patches are generated by using one Code LM to produce patching plans in the form of instructions, then using another Code LM to follow the instructions and execute the code changes. To optimize the system for competition, we will evaluate it using metrics on both vulnerability detection and patching that reflect accuracy, effectiveness, acceptability, and code size. Finally, we evaluate any risks that our design may pose and identify mitigation strategies that may resolve these issues.
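
The sketch below is a minimal, hypothetical illustration of the two-step pipeline the abstract describes: a detection stage that locates a vulnerability from static and dynamic context, followed by a patching stage in which one Code LM produces a plan of instructions and a second Code LM applies them. All names (`CodeLM`, `detect_vulnerability`, `plan_patch`, `execute_patch`, `repair`) are placeholders invented for this example, not the team’s actual implementation, and any concrete model could sit behind the simple “prompt in, text out” interface assumed here.

```python
"""Conceptual sketch (not the team's code) of a two-step automated repair pipeline."""
from dataclasses import dataclass
from typing import Callable

# A Code LM is treated abstractly as "prompt in, text out"; any concrete
# local or hosted model could be wrapped behind this signature.
CodeLM = Callable[[str], str]


@dataclass
class VulnerabilityReport:
    file: str
    line: int            # suspected location of the flaw
    description: str     # findings from static/dynamic analysis


def detect_vulnerability(source: str, analysis_context: str) -> VulnerabilityReport:
    """Step 1: locate the vulnerability.

    The abstract says detection uses a model fed with both static and
    dynamic information; here that context is a simple string stand-in.
    """
    # A real detector would run analyzers plus a learned model; this stub
    # just records the combined context for illustration.
    return VulnerabilityReport(file="target.c", line=42,
                               description=analysis_context)


def plan_patch(planner: CodeLM, source: str, report: VulnerabilityReport) -> str:
    """Step 2a: one Code LM produces a patching plan as instructions."""
    prompt = (
        f"Vulnerability at {report.file}:{report.line}: {report.description}\n"
        f"Source:\n{source}\n"
        "Write step-by-step instructions to fix the flaw without changing "
        "the program's functionality, style, or readability."
    )
    return planner(prompt)


def execute_patch(executor: CodeLM, source: str, plan: str) -> str:
    """Step 2b: a second Code LM follows the plan and emits the patched code."""
    prompt = (
        f"Apply these instructions to the code.\n"
        f"Instructions:\n{plan}\nCode:\n{source}"
    )
    return executor(prompt)


def repair(source: str, analysis_context: str,
           planner: CodeLM, executor: CodeLM) -> str:
    """Full pipeline: detect, plan, then execute the repair."""
    report = detect_vulnerability(source, analysis_context)
    plan = plan_patch(planner, source, report)
    return execute_patch(executor, source, plan)
```

Splitting planning and execution across two Code LMs, as the abstract proposes, lets the plan be inspected (and scored for acceptability or code size) before any code is rewritten; the sketch above only shows the control flow, not the evaluation metrics or risk mitigations the proposal also covers.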