CD-UAP: Class discriminative universal adversarial perturbation

Abstract

A single universal adversarial perturbation (UAP) can be added to all natural images to change most of their predicted class labels. It is of high practical relevance for an attacker to have flexible control over which classes are attacked; the existing UAP method, however, attacks samples from all classes. In this work, we propose a new universal attack method that generates a single perturbation which fools a target network into misclassifying only a chosen group of classes, while having limited influence on the remaining classes. Since the generated universal adversarial perturbation discriminates between targeted and non-targeted classes, we term it class discriminative universal adversarial perturbation (CD-UAP). We propose a simple yet effective algorithmic framework, under which we design and compare various loss function configurations tailored for the class discriminative universal attack. The proposed approach is evaluated with extensive experiments on various benchmark datasets. It also achieves state-of-the-art performance on the original UAP task of attacking all classes, further demonstrating its effectiveness.
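
The core idea lends itself to a short illustration. The sketch below is a minimal, hypothetical PyTorch rendering (not the authors' released code or their exact loss design, which the paper compares in several configurations): cross-entropy is increased on samples from the chosen target classes and kept low on the remaining classes, while a single shared perturbation is constrained to an L-infinity budget. The helper name cd_uap_step, the balancing weight lam, and the budget eps are assumptions made for illustration only.

```python
# Illustrative sketch of a class-discriminative UAP objective (assumed, not the paper's code).
import torch
import torch.nn.functional as F

def cd_uap_step(model, delta, x, y, targeted_mask, eps=10/255, lam=1.0, lr=0.01):
    """One optimization step for a shared perturbation `delta`.

    targeted_mask: boolean tensor, True for samples from the chosen target classes.
    Assumes the batch contains samples from both the targeted and non-targeted groups.
    """
    delta = delta.clone().detach().requires_grad_(True)
    logits = model(torch.clamp(x + delta, 0.0, 1.0))

    # Push targeted-class samples away from their correct labels (maximize CE),
    # while keeping non-targeted samples at their original predictions (minimize CE).
    ce = F.cross_entropy(logits, y, reduction="none")
    loss = -ce[targeted_mask].mean() + lam * ce[~targeted_mask].mean()

    loss.backward()
    with torch.no_grad():
        delta = delta - lr * delta.grad.sign()  # signed gradient step on the shared perturbation
        delta = delta.clamp(-eps, eps)          # enforce the L-infinity budget
    return delta.detach()
```

In practice such a step would be iterated over batches drawn from both class groups, with the model in eval mode and its parameter gradients ignored, so that only the universal perturbation is updated.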

Cite

APA

Zhang, C., Benz, P., Imtiaz, T., & Kweon, I. S. (2020). CD-UAP: Class discriminative universal adversarial perturbation. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 6754–6761). AAAI press. https://doi.org/10.1609/aaai.v34i04.6154
