Unrestricted Black-Box Adversarial Attack Using GAN with Limited Queries


Abstract

Adversarial examples are inputs intentionally generated to fool a deep neural network. Recent studies have proposed unrestricted adversarial attacks that are not norm-constrained. However, previous unrestricted attack methods are still limited in their ability to fool real-world applications in a black-box setting. In this paper, we present a novel method for generating unrestricted adversarial examples using a GAN, where an attacker can only access the top-1 final decision of a classification model. Our method, Latent-HSJA, efficiently leverages the advantages of a decision-based attack in the latent space and successfully manipulates latent vectors to fool the classification model. With extensive experiments, we demonstrate that our proposed method is effective at evaluating the robustness of classification models with limited queries in a black-box setting. First, we demonstrate that our targeted attack method is query-efficient at producing unrestricted adversarial examples for a facial identity recognition model that contains 307 identities. Then, we demonstrate that the proposed method can also successfully attack a real-world celebrity recognition service. The code is available at https://github.com/ndb796/LatentHSJA.
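The core idea, a decision-based attack carried out in a GAN's latent space rather than pixel space, can be sketched as follows. This is a hedged illustration, not the paper's implementation: `generator` and `top1_label` are toy stand-ins for a real GAN generator and a hard-label (top-1 decision only) classifier, and the binary search shown is only the boundary-finding step that decision-based attacks such as HopSkipJumpAttack build on.

```python
import numpy as np

# Hypothetical stand-ins (not from the paper's code): a toy "generator"
# mapping latent vectors to images, and a classifier exposing only its
# top-1 decision, as in the hard-label black-box setting.
def generator(z):
    # Toy generator: identity mapping from latent space to image space.
    return z

def top1_label(x):
    # Toy hard-label classifier: class 1 iff the first coordinate exceeds 0.5.
    return 1 if x[0] > 0.5 else 0

def latent_boundary_search(z_src, z_adv, target, n_queries=30):
    """Binary search in latent space for a point near the decision
    boundary that the model still labels as `target`, querying only
    the classifier's top-1 decision."""
    lo, hi = 0.0, 1.0  # hi = 1 interpolates fully to the adversarial latent
    for _ in range(n_queries):
        mid = (lo + hi) / 2
        z_mid = (1 - mid) * z_src + mid * z_adv
        if top1_label(generator(z_mid)) == target:
            hi = mid   # still adversarial: move closer to the source latent
        else:
            lo = mid
    return (1 - hi) * z_src + hi * z_adv

z_src = np.array([0.0, 0.0])   # latent of the original (source) image
z_adv = np.array([1.0, 0.0])   # latent already classified as the target
z_b = latent_boundary_search(z_src, z_adv, target=1)
print(z_b)
```

Because each iteration costs one model query, the search budget directly controls query efficiency; the returned latent stays on the adversarial side of the boundary while moving as close as possible to the source.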

Citation (APA)

Na, D., Ji, S., & Kim, J. (2023). Unrestricted Black-Box Adversarial Attack Using GAN with Limited Queries. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13801 LNCS, pp. 467–482). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-25056-9_30
