Adversarial example defense based on image reconstruction

Abstract

The rapid development of deep neural networks (DNNs) has promoted their widespread application in image recognition, natural language processing, and autonomous driving. However, DNNs are vulnerable to adversarial examples: input samples with imperceptible perturbations that can easily invalidate a DNN and even deliberately alter its classification results. Therefore, this article proposes a preprocessing defense framework based on image compression and reconstruction to defend against adversarial examples. First, the framework performs pixel depth compression on the input image, exploiting the sensitivity of adversarial examples to eliminate adversarial perturbations. Second, we use a super-resolution image reconstruction network to restore image quality and thereby map the adversarial example back to a clean image. Consequently, there is no need to modify the network structure of the classifier model, and the framework can easily be combined with other defense methods. Finally, we evaluate the algorithm on the MNIST, Fashion-MNIST, and CIFAR-10 datasets; the experimental results show that our approach outperforms current techniques in the task of defending against adversarial example attacks.
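The pixel depth compression step described above can be illustrated as a simple quantization of pixel values to fewer bits, which discards the low-amplitude detail that adversarial perturbations typically occupy. The sketch below is not the authors' implementation; the function name and the 4-bit setting are assumptions chosen for illustration:

```python
import numpy as np

def compress_pixel_depth(image: np.ndarray, bits: int = 4) -> np.ndarray:
    """Quantize an image with float values in [0, 1] to `bits` bits per channel.

    Rounding to 2**bits levels removes small adversarial perturbations;
    a reconstruction network would then restore the lost image quality.
    """
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels

# Example: squeeze a random 32x32 RGB image from float precision to 4 bits
img = np.random.rand(32, 32, 3).astype(np.float32)
squeezed = compress_pixel_depth(img, bits=4)
```

In a full pipeline, `squeezed` would be passed through a super-resolution network before classification; the quantization alone already bounds the per-pixel effect of any perturbation smaller than half a quantization level.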

Cite


Zhang, Y., Xu, H., Pei, C., & Yang, G. (2021). Adversarial example defense based on image reconstruction. PeerJ Computer Science, 7. https://doi.org/10.7717/PEERJ-CS.811
