AdVulCode: Generating Adversarial Vulnerable Code against Deep Learning-Based Vulnerability Detectors


Abstract

Deep learning-based vulnerability detection models have received widespread attention; however, these models are susceptible to adversarial attacks, and adversarial examples are a primary research direction for improving model robustness. Adversarial example generation methods for source code tasks fall into three main categories: changing identifier names, adding dead code, and changing code structure. However, these methods cannot be directly applied to vulnerability detection. Therefore, we present the first study of adversarial attacks on vulnerability detection models. Specifically, we use equivalent transformations to generate candidate statements and introduce an improved Monte Carlo tree search algorithm to guide the selection of candidate statements when generating adversarial examples. In addition, we devise a black-box approach that can be applied to a wide range of vulnerability detection models. The experimental results show that our approach achieves attack success rates of 16.48%, 27.92%, and 65.20% on three vulnerability detection models with different levels of granularity. Compared with the state-of-the-art source code attack method ALERT, our method can handle models with identifier name mapping, and our attack success rate is 27.59% higher on average than that of ALERT.
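The abstract describes a search over semantically equivalent rewrites guided by Monte Carlo tree search against a black-box detector. The sketch below is a hypothetical illustration of that general idea, not the paper's implementation: `detector_score`, `candidates`, and the UCB-style scoring are assumed names and simplifications chosen for the example.

```python
import math

def ucb(total_reward, visits, parent_visits, c=1.4):
    """Upper-confidence bound: balance exploring untried transformations
    against exploiting ones that already lowered the detector's score."""
    if visits == 0:
        return float("inf")
    return total_reward / visits + c * math.sqrt(math.log(parent_visits) / visits)

def mcts_attack(code, candidates, detector_score, iterations=100):
    """Hypothetical sketch of MCTS-guided adversarial rewriting.

    code: list of statements
    candidates: dict mapping a statement index to its equivalent rewrites
    detector_score: black-box model returning the probability that the
        code is flagged as vulnerable (only queried, never inspected)
    """
    base = detector_score(code)
    # One "arm" per (statement, rewrite) pair: [accumulated reward, visits].
    stats = {(i, j): [0.0, 0] for i in candidates for j in range(len(candidates[i]))}

    best_code, best_score = list(code), base
    for _ in range(iterations):
        parent_visits = sum(v for _, v in stats.values()) + 1
        # Selection: pick the transformation with the highest UCB value.
        i, j = max(stats, key=lambda k: ucb(stats[k][0], stats[k][1], parent_visits))
        # Simulation: apply the equivalent rewrite and query the black-box model.
        mutated = list(code)
        mutated[i] = candidates[i][j]
        score = detector_score(mutated)
        reward = base - score  # reward = how far the vulnerability score dropped
        stats[(i, j)][0] += reward
        stats[(i, j)][1] += 1
        if score < best_score:
            best_code, best_score = mutated, score
    return best_code, best_score
```

In this toy setup the attack succeeds when `best_score` falls below the detector's decision threshold while every applied rewrite preserves the program's semantics.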

Citation (APA)

Yu, X., Li, Z., Huang, X., & Zhao, S. (2023). AdVulCode: Generating Adversarial Vulnerable Code against Deep Learning-Based Vulnerability Detectors. Electronics (Switzerland), 12(4). https://doi.org/10.3390/electronics12040936
