Generating adversarial examples for holding robustness of source code processing models

76 citations · 63 Mendeley readers

Abstract

Automated processing, analysis, and generation of source code are among the key activities in the software and system life-cycle. While deep learning (DL) exhibits a certain level of capability in handling these tasks, current state-of-the-art DL models still suffer from robustness issues and can be easily fooled by adversarial attacks. Unlike adversarial attacks on images, audio, and natural language, the structured nature of programming languages brings new challenges. In this paper, we propose a Metropolis-Hastings sampling-based identifier renaming technique, named Metropolis-Hastings Modifier (MHM), which generates adversarial examples for DL models specialized in source code processing. Our in-depth evaluation on a functionality classification benchmark demonstrates the effectiveness of MHM in generating adversarial examples of source code. The improved robustness and performance achieved through adversarial training with MHM further confirm the usefulness of DL-based methods for future fully automated source code processing.
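The core idea described in the abstract, iteratively renaming identifiers and accepting or rejecting each rename with a Metropolis-Hastings ratio derived from the victim model's confidence, can be sketched as follows. This is a simplified illustration, not the paper's exact formulation: the toy victim scorer, the candidate name pool, and the regex-based renaming are all hypothetical stand-ins for the real classifier, vocabulary, and parser-aware substitution.

```python
import random
import re

def victim_true_label_prob(code: str) -> float:
    """Toy stand-in for the victim classifier: returns the probability it
    assigns to the ground-truth label. Here we pretend the model is more
    confident when identifiers are longer, purely so the sketch runs."""
    idents = re.findall(r"\b[a-zA-Z_]\w*\b", code)
    if not idents:
        return 0.5
    return max(0.05, min(0.95, sum(len(t) for t in idents) / (8.0 * len(idents))))

# Hypothetical pool of replacement names; the paper samples from a vocabulary.
CANDIDATES = ["tmp", "val", "idx", "buf", "acc", "cnt"]

def mhm_attack(code, identifiers, iters=200, seed=0):
    """Metropolis-Hastings identifier renaming (simplified sketch).

    Proposal: pick one renamable identifier and a candidate new name.
    Acceptance: MH rule with a symmetric proposal, using the probability
    that the model misclassifies as the (unnormalized) target density.
    """
    rng = random.Random(seed)
    current = code
    p_cur = 1.0 - victim_true_label_prob(current)  # P(model is wrong)
    for _ in range(iters):
        old = rng.choice(identifiers)
        new = rng.choice(CANDIDATES)
        if new == old or new in identifiers:
            continue  # avoid name collisions with existing identifiers
        proposal = re.sub(rf"\b{re.escape(old)}\b", new, current)
        p_prop = 1.0 - victim_true_label_prob(proposal)
        # Accept with probability min(1, p_prop / p_cur): uphill moves are
        # always taken, downhill moves occasionally, so sampling can escape
        # local optima that a greedy rename search would get stuck in.
        if p_prop >= p_cur or rng.random() < p_prop / max(p_cur, 1e-9):
            current, p_cur = proposal, p_prop
            identifiers = [new if t == old else t for t in identifiers]
        if p_cur > 0.5:  # model now misclassifies: adversarial example found
            break
    return current, p_cur
```

Because renaming identifiers never changes a program's semantics, any accepted sample is a valid, compilable adversarial candidate, which is the property that distinguishes this attack from pixel- or token-level perturbations.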

Citation (APA)

Zhang, H., Li, Z., Li, G., Ma, L., Liu, Y., & Jin, Z. (2020). Generating adversarial examples for holding robustness of source code processing models. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 1169–1176). AAAI press. https://doi.org/10.1609/aaai.v34i01.5469
