Incremental learning of GAN for detecting multiple adversarial attacks

Abstract

Neural networks are vulnerable to adversarial attacks: carefully crafted small perturbations can cause neural network classifiers to misclassify. Since adversarial attacks are a serious potential problem in many neural-network-based applications and new attacks emerge constantly, detection strategies that can adapt to new attacks quickly are urgently needed. Moreover, a detector is hard to train when only limited samples are available. To address these problems, we propose a GAN-based incremental learning framework with Jacobian-based data augmentation to detect adversarial samples. To show that the proposed framework handles multiple adversarial attacks, we implement FGSM, LocSearchAdv, and a PSO-based attack on the MNIST and CIFAR-10 datasets. The experiments show that our detection framework performs well against these adversarial attacks.
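The abstract names two concrete mechanisms: FGSM for generating adversarial samples and Jacobian-based data augmentation for enlarging the detector's training set from few samples. The sketch below illustrates both in generic PyTorch under stated assumptions; it is not the authors' implementation, and the classifier `model`, the `epsilon` step size, and the `lam` augmentation step are illustrative placeholders.

```python
# Minimal sketches of FGSM and Jacobian-based data augmentation,
# assuming a PyTorch image classifier `model` with inputs in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """Fast Gradient Sign Method: one signed-gradient step on the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb every pixel by epsilon in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def jacobian_augment(model, x, lam=0.1):
    """Jacobian-based data augmentation: step each sample along the sign of
    the gradient of its predicted-class logit to synthesize new inputs."""
    x_aug = x.clone().detach().requires_grad_(True)
    logits = model(x_aug)
    preds = logits.argmax(dim=1)
    # Sum of predicted-class logits; its input gradient gives, per sample,
    # the relevant row of the Jacobian of the model outputs.
    logits.gather(1, preds.unsqueeze(1)).sum().backward()
    return (x_aug + lam * x_aug.grad.sign()).clamp(0.0, 1.0).detach()
```

In a detection pipeline of this kind, outputs of routines like these would typically be labeled as adversarial/synthetic and fed to the detector's training loop alongside clean samples.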

Citation (APA)

Yi, Z., Yu, J., Li, S., Tan, Y., & Wu, Q. (2019). Incremental learning of GAN for detecting multiple adversarial attacks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11729 LNCS, pp. 673–684). Springer Verlag. https://doi.org/10.1007/978-3-030-30508-6_53
