Generating Adversarial yet Inconspicuous Patches with a Single Image (Student Abstract)

Abstract

Deep neural networks have been shown to be vulnerable to adversarial patches: exotic patterns that cause incorrect model predictions. Nevertheless, existing approaches to adversarial patch generation rarely consider contextual consistency between patches and the image background, so such patches are easily detected by human observation. These methods also require large amounts of training data, which is computationally expensive. To overcome these challenges, we propose an approach that generates adversarial yet inconspicuous patches from one single image. In our approach, adversarial patches are produced in a coarse-to-fine manner with multiple scales of generators and discriminators. Patch location is selected according to the perceptual sensitivity of the victim model. Contextual information is encoded during min-max training to make patches consistent with their surroundings.
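The two core ideas, placing the patch where the victim model is most sensitive and then optimizing the patch pixels adversarially, can be illustrated with a toy sketch. This is not the authors' GAN-based coarse-to-fine method: it substitutes a plain gradient-ascent patch update, uses a linear stand-in for the victim model, and approximates "perceptual sensitivity" by input-gradient magnitude. All names (`score`, `sensitivity`, sizes `H`, `W`, `P`) are illustrative assumptions.

```python
import numpy as np

# Toy sketch (NOT the paper's GAN pipeline): place a patch on the most
# sensitive region, then run gradient ascent on the patch pixels only.
rng = np.random.default_rng(0)
H = W = 8                          # toy image size
P = 3                              # patch side length
image = rng.random((H, W))
w = rng.standard_normal((H, W))    # toy linear "victim model": score = sum(w * x)

def score(x):
    return float((w * x).sum())

# 1) Sensitivity map: for this linear model, d(score)/d(input) = w,
#    so |w| serves as the perceptual-sensitivity proxy.
sensitivity = np.abs(w)

# 2) Pick the PxP window with the highest total sensitivity.
best, loc = -1.0, (0, 0)
for i in range(H - P + 1):
    for j in range(W - P + 1):
        s = sensitivity[i:i + P, j:j + P].sum()
        if s > best:
            best, loc = s, (i, j)

# 3) Signed-gradient ascent on the patch pixels, clipped to valid [0, 1].
i, j = loc
adv = image.copy()
for _ in range(50):
    grad = w[i:i + P, j:j + P]     # d(score)/d(patch pixels)
    adv[i:i + P, j:j + P] = np.clip(
        adv[i:i + P, j:j + P] + 0.1 * np.sign(grad), 0.0, 1.0)

print(score(image), score(adv))    # the patched image raises the model's score
```

In the paper this inner maximization is one side of the min-max game: generators produce the patch at multiple scales while discriminators push it toward consistency with the surrounding background, which the sketch above omits entirely.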

Citation (APA)

Luo, J., Bai, T., & Zhao, J. (2021). Generating Adversarial yet Inconspicuous Patches with a Single Image (Student Abstract). In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 18, pp. 15837–15838). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i18.17915
