Advanced ensemble adversarial example on unknown deep neural network classifiers


Abstract

Deep neural networks (DNNs) are widely used in applications such as image, voice, and pattern recognition. However, it has recently been shown that DNNs can be vulnerable to small image distortions that are imperceptible to humans. This type of attack is known as an adversarial example and poses a significant threat to deep learning systems. A generalized adversarial example that can deceive most DNN classifiers, including unknown target classifiers, is even more threatening. We propose a generalized adversarial example attack method that can effectively attack unknown classifiers by using a hierarchical ensemble method. Our proposed scheme creates advanced ensemble adversarial examples that achieve reasonable attack success rates against unknown classifiers. Our experimental results show that, on an unknown classifier, the proposed method achieves attack success rates up to 9.25% and 18.94% higher on MNIST data, and 4.1% and 13% higher on CIFAR-10 data, than the previous ensemble method and the conventional baseline method, respectively.
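To illustrate the general idea behind ensemble adversarial examples, here is a minimal sketch (not the paper's exact hierarchical algorithm) of an FGSM-style attack that averages input gradients over several known substitute classifiers, in the hope that the resulting perturbation transfers to an unknown classifier. The linear softmax "substitute models", shapes, and epsilon value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def loss_and_grad(W, x, y):
    """Cross-entropy loss of a linear softmax classifier W on input x
    with true label y, and the loss gradient with respect to x."""
    p = softmax(W @ x)
    onehot = np.eye(W.shape[0])[y]
    grad_x = W.T @ (p - onehot)
    return -np.log(p[y] + 1e-12), grad_x

def ensemble_fgsm(models, x, y, eps=0.1):
    """One-step ensemble attack: average the input gradients over all
    substitute models, then take an eps-bounded step in the sign
    direction (perturbation is invisible for small eps)."""
    g = np.mean([loss_and_grad(W, x, y)[1] for W in models], axis=0)
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)

# Toy setup: three 10-class linear "substitutes" on a flattened
# 28x28 input (MNIST-like shapes, random weights for illustration).
models = [rng.normal(scale=0.1, size=(10, 784)) for _ in range(3)]
x = rng.uniform(0.2, 0.8, size=784)   # clean input in [0, 1]
y = 3                                 # assumed true label
x_adv = ensemble_fgsm(models, x, y, eps=0.1)
```

Because the perturbation follows the gradient averaged over the whole ensemble, it raises the ensemble's mean loss rather than overfitting to any single substitute model, which is the intuition behind why such examples transfer better to unseen classifiers.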

Citation (APA)

Kwon, H., Kim, Y., Park, K. W., Yoon, H., & Choi, D. (2018). Advanced ensemble adversarial example on unknown deep neural network classifiers. IEICE Transactions on Information and Systems, E101D(10), 2485–2500. https://doi.org/10.1587/transinf.2018EDP7073
