AdvGuard: Fortifying Deep Neural Networks Against Optimized Adversarial Example Attack

Abstract

Deep neural networks (DNNs) deliver excellent performance in image recognition, speech recognition, video recognition, and pattern analysis. However, they are vulnerable to adversarial example attacks. An adversarial example is an input to which a small amount of noise has been strategically added; it appears normal to the human eye but is misrecognized by the DNN. In this paper, we propose AdvGuard, a method for resisting adversarial example attacks. This defense prevents the generation of adversarial examples by constructing a robust DNN that returns random confidence values. The method requires no training on adversarial examples, no additional processing modules, and no input-data filtering. In addition, a DNN constructed with the proposed scheme defends against adversarial examples while maintaining its accuracy on the original samples. In the experimental evaluation, MNIST and CIFAR10 were used as datasets, and TensorFlow was used as the machine learning library. The results show that a DNN constructed using the proposed method correctly classifies adversarial examples with 100% and 99.5% accuracy on MNIST and CIFAR10, respectively.
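
The abstract only hints at the mechanism. As a rough, hypothetical sketch of the idea (not the authors' implementation), the TensorFlow snippet below wraps a trained classifier so that it reports randomized confidence values while keeping the top-1 class unchanged: clean-sample accuracy is preserved, but the reported scores give an optimization-based attacker no useful signal. All names here (RandomConfidenceWrapper, mnist_cnn.h5) are illustrative assumptions.

```python
import tensorflow as tf

class RandomConfidenceWrapper(tf.keras.Model):
    """Hypothetical sketch: report randomized confidence values while
    preserving the base model's top-1 prediction, so accuracy on clean
    samples is unchanged but the output scores carry no information
    useful to an optimization-based adversarial attack."""

    def __init__(self, base_model):
        super().__init__()
        self.base_model = base_model

    def call(self, x):
        probs = self.base_model(x)                 # (batch, num_classes)
        num_classes = tf.shape(probs)[-1]
        top1 = tf.argmax(probs, axis=-1)           # predicted class per sample
        mask = tf.one_hot(top1, num_classes)       # 1.0 at the predicted class
        # Random scores in [0, 0.5) for the other classes and [0.5, 1.0)
        # for the predicted class, so the argmax (hence accuracy) is kept.
        rand = tf.random.uniform(tf.shape(probs), 0.0, 0.5)
        noisy = rand + 0.5 * mask
        return noisy / tf.reduce_sum(noisy, axis=-1, keepdims=True)

# Usage (illustrative): wrap any trained classifier before serving it.
base = tf.keras.models.load_model("mnist_cnn.h5")  # hypothetical checkpoint
guarded = RandomConfidenceWrapper(base)
```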

Citation (APA)

Kwon, H., & Lee, J. (2024). AdvGuard: Fortifying Deep Neural Networks Against Optimized Adversarial Example Attack. IEEE Access, 12, 5345–5356. https://doi.org/10.1109/ACCESS.2020.3042839
