Adversarial attacks on image classification aim to make visually imperceptible changes to an image that induce misclassification. Popular computational definitions of imperceptibility are largely chosen for mathematical convenience, such as pixel p-norms. We perform a behavioral study that quantitatively demonstrates the mismatch between human perception and popular imperceptibility measures such as pixel p-norms, earth mover's distance, structural similarity index, and deep network embeddings. Our results call for a reassessment of the current adversarial attack formulation.
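As a concrete illustration of the first measure the abstract names, the sketch below computes the pixel p-norm of an adversarial perturbation, i.e. the p-norm of the pixel-wise difference between the clean and perturbed images. This is a minimal, hypothetical example (function name and toy images are our own, not from the paper), showing the L2 and L-infinity variants most commonly used as perturbation budgets in attack formulations.

```python
import numpy as np

def pixel_p_norm(clean, adversarial, p=2):
    """Return ||adversarial - clean||_p over flattened pixel values.

    p may be any positive float, or np.inf for the max-norm.
    """
    delta = (adversarial - clean).ravel()
    if np.isinf(p):
        return float(np.abs(delta).max())
    return float(np.sum(np.abs(delta) ** p) ** (1.0 / p))

# Toy 2x2 grayscale "images": a uniform change of 1.0 per pixel.
clean = np.zeros((2, 2))
adv = clean + 1.0

print(pixel_p_norm(clean, adv, p=2))       # L2 norm: sqrt(4) = 2.0
print(pixel_p_norm(clean, adv, p=np.inf))  # L-infinity norm: 1.0
```

The point of the paper is precisely that such norms, while convenient to optimize under, need not track what humans actually perceive as a small change.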
CITATION STYLE
Sen, A., Zhu, X., Marshall, E., & Nowak, R. (2020). Popular Imperceptibility Measures in Visual Adversarial Attacks are Far from Human Perception. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12513 LNCS, pp. 188–199). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-64793-3_10