Interpreting Adversarial Examples in Deep Learning: A Review


Abstract

Deep learning is increasingly applied in safety-critical scenarios but has recently been found to be susceptible to imperceptible adversarial perturbations. This raises serious concerns about the adversarial robustness of deep neural network (DNN)-based applications, and various adversarial attacks and defense approaches have been proposed in response. However, existing studies implement different types of attacks and defenses under differing assumptions, and a full theoretical understanding and interpretation of adversarial examples is still lacking. Rather than reviewing technical progress in adversarial attacks and defenses, this article presents a framework of three perspectives for discussing recent work that aims to theoretically explain adversarial examples. Within each perspective, the various hypotheses are categorized into subcategories and introduced systematically. To the best of our knowledge, this study is the first to survey existing research on adversarial examples and adversarial robustness from the interpretability perspective. Drawing on the reviewed literature, the survey characterizes open problems and challenges and highlights potential directions for future research on adversarial examples.
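To make the notion of an imperceptible adversarial perturbation concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the classic attacks the survey's literature builds on. The model, labels, and epsilon value are illustrative assumptions, not details taken from the survey itself.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Craft an adversarial example with the fast gradient sign method (FGSM).

    A single step of size `epsilon` (measured in the L-infinity norm) is taken
    in the direction that increases the classification loss, so the perturbed
    input stays visually close to the original yet can flip the prediction.
    Assumes `x` holds pixel values in [0, 1]; epsilon = 8/255 is a common
    but illustrative choice.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturb each pixel by +/- epsilon along the sign of the loss gradient.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```

Comparing `model(x)` with `model(fgsm_attack(model, x, y))` on a trained classifier typically shows predictions changing under a perturbation that is invisible to a human viewer, which is the phenomenon whose theoretical explanations this survey reviews.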

Citation (APA)

Han, S., Lin, C., Shen, C., Wang, Q., & Guan, X. (2023). Interpreting Adversarial Examples in Deep Learning: A Review. ACM Computing Surveys, 55(14). https://doi.org/10.1145/3594869
