A data independent approach to generate adversarial patches


Abstract

Deep neural networks are vulnerable to adversarial examples, i.e., carefully perturbed inputs designed to mislead the network at inference time. Recently, adversarial patches, in which the perturbation is confined to a small, localized region, have attracted attention because they are easy to deploy in real-world attacks. However, existing attack strategies require the training data on which the target network was trained, which makes them impractical: an attacker cannot reasonably be expected to obtain that data. In this paper, we propose a data-independent approach to generating adversarial patches (DiAP). The goal is to craft adversarial patches that fool the target model on most images without any knowledge of the training data distribution. In the absence of data, we mount non-targeted attacks by fooling the features learned at multiple layers of the network, and then exploit the information embedded in the non-targeted patches to craft targeted ones. Extensive experiments demonstrate high attack success rates for DiAP; in the black-box setting in particular, DiAP outperforms state-of-the-art adversarial patch attacks. The patches generated by DiAP also work well in physical-world scenarios, and can be created offline and then broadly shared.
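
To make the data-free, non-targeted step concrete, below is a minimal PyTorch-style sketch: a patch pasted onto a neutral canvas is optimized to inflate activations at several intermediate layers, with no training images involved anywhere in the loop. The backbone (VGG16), the hooked layers, the patch size and location, the log-norm loss, and all hyperparameters are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of a data-independent, non-targeted patch objective
# in the spirit of DiAP's multi-layer feature fooling. All specifics here
# (backbone, layer choice, loss form, hyperparameters) are assumptions.
import torch
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Collect activations from several intermediate stages via forward hooks.
activations = []
def hook(_module, _inp, out):
    activations.append(out)
hooked = [model.features[i] for i in (4, 9, 16, 23, 30)]  # illustrative choice
handles = [layer.register_forward_hook(hook) for layer in hooked]

# Optimize a small patch pasted onto a neutral gray canvas: no training
# data is used at any point.
patch = torch.rand(1, 3, 64, 64, device=device, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

for step in range(500):
    canvas = torch.full((1, 3, 224, 224), 0.5, device=device)
    canvas[:, :, 80:144, 80:144] = patch.clamp(0, 1)  # fixed location for simplicity
    activations.clear()
    model(canvas)  # input normalization omitted for brevity
    # Data-independent objective: minimizing the negative log of the
    # activation norms drives up the response at every hooked layer, so the
    # patch dominates whatever features the underlying image contributes.
    loss = -sum(torch.log(a.norm() + 1e-8) for a in activations)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

for h in handles:
    h.remove()
```

Under the paper's two-stage scheme, the non-targeted patch produced this way could then serve as the starting point for the targeted stage; that stage is omitted from the sketch.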

Citation (APA)

Zhou, X., Pan, Z., Duan, Y., Zhang, J., & Wang, S. (2021). A data independent approach to generate adversarial patches. Machine Vision and Applications, 32(3). https://doi.org/10.1007/s00138-021-01194-6
