Popular Imperceptibility Measures in Visual Adversarial Attacks are Far from Human Perception


Abstract

Adversarial attacks on image classification aim to make visually imperceptible changes that induce misclassification. Popular computational definitions of imperceptibility are chosen largely for mathematical convenience, such as pixel p-norms. We perform a behavioral study that quantitatively demonstrates the mismatch between human perception and popular imperceptibility measures: pixel p-norms, earth mover's distance, structural similarity index, and deep net embeddings. Our results call for a reassessment of the current adversarial attack formulation.
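For concreteness, the sketch below shows how the measures named in the abstract are typically computed for a pair of images. It is a minimal illustration, not the authors' experimental code: the images and the perturbation are hypothetical stand-ins, and the 1-D earth mover's distance over pixel intensities is one common simplification of the full 2-D transport problem.

```python
import numpy as np
from scipy.stats import wasserstein_distance
from skimage.metrics import structural_similarity

# Hypothetical grayscale images in [0, 1]: an original and an
# adversarially perturbed copy (small random noise as a stand-in
# for an actual attack's perturbation).
rng = np.random.default_rng(0)
original = rng.random((32, 32))
perturbed = np.clip(original + rng.normal(scale=0.01, size=(32, 32)), 0.0, 1.0)
delta = (perturbed - original).ravel()

# Pixel p-norms of the perturbation, the quantities most attacks constrain.
l2_norm = np.linalg.norm(delta, ord=2)
linf_norm = np.linalg.norm(delta, ord=np.inf)

# 1-D earth mover's distance between the pixel-intensity distributions.
emd = wasserstein_distance(original.ravel(), perturbed.ravel())

# Structural similarity index (1.0 means the images are identical).
ssim = structural_similarity(original, perturbed, data_range=1.0)

print(f"L2={l2_norm:.4f}  Linf={linf_norm:.4f}  EMD={emd:.5f}  SSIM={ssim:.4f}")
```

The paper's deep-net-embedding measure would analogously take the distance between feature vectors of the two images under a pretrained network; which network and layer are used is a design choice not fixed by the abstract.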

Citation (APA)

Sen, A., Zhu, X., Marshall, E., & Nowak, R. (2020). Popular Imperceptibility Measures in Visual Adversarial Attacks are Far from Human Perception. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12513 LNCS, pp. 188–199). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-64793-3_10
