HLR: Generating adversarial examples by high-level representations


Abstract

Neural networks can be fooled by adversarial examples. Many methods have recently been proposed to generate adversarial examples, but these works concentrate mainly on pixel-wise information, which limits the transferability of the resulting adversarial examples. In contrast, we introduce a perceptual module that extracts high-level representations and changes the manifold of the adversarial examples. In addition, we propose a novel network structure to replace the generative adversarial network (GAN). The improved structure ensures high similarity between adversarial examples and their originals and stabilizes the training process. Extensive experiments demonstrate that our method significantly improves transferability. Furthermore, the adversarial-training defence is ineffective against our attack.
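The idea of attacking high-level representations rather than raw pixels can be sketched in a few lines. The toy "perceptual module" below is a hypothetical stand-in (a fixed random two-layer network), not the paper's trained model: the attack perturbs the input, within a small L-infinity budget, so as to push its feature-space representation away from that of the original input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in "perceptual module": a fixed two-layer network
# whose output serves as the high-level representation. The real method
# would use features from a pretrained deep network instead.
W1 = rng.standard_normal((32, 64)) * 0.1
W2 = rng.standard_normal((16, 32)) * 0.1

def features(x):
    h = np.maximum(W1 @ x, 0.0)  # ReLU hidden layer
    return W2 @ h

def feature_attack(x, eps=0.03, steps=10, lr=0.01):
    """Perturb x to push its high-level representation away from the
    original, keeping the perturbation inside an L-infinity eps-ball."""
    f0 = features(x)
    # start from a small random point so the initial gradient is nonzero
    x_adv = x + rng.uniform(-eps / 2, eps / 2, size=x.shape)
    for _ in range(steps):
        h = np.maximum(W1 @ x_adv, 0.0)
        f = W2 @ h
        # gradient of 0.5 * ||f - f0||^2 with respect to x_adv
        df = f - f0
        dh = (W2.T @ df) * (h > 0)
        dx = W1.T @ dh
        x_adv = x_adv + lr * np.sign(dx)          # ascend the feature distance
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the ball
    return x_adv

x = rng.standard_normal(64)
x_adv = feature_attack(x)
print(np.max(np.abs(x_adv - x)))                       # stays within eps
print(np.linalg.norm(features(x_adv) - features(x)))   # feature distance grows
```

In the paper this feature-space objective complements, rather than replaces, the generator that produces the perturbation; the sketch only illustrates why moving along feature-space gradients differs from purely pixel-wise attacks.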

Citation (APA)

Hao, Y., Li, T., Li, L., Jiang, Y., & Cheng, X. (2019). HLR: Generating adversarial examples by high-level representations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11729 LNCS, pp. 724–730). Springer Verlag. https://doi.org/10.1007/978-3-030-30508-6_57
