Generating Invariance-Based Adversarial Examples: Bringing Humans Back into the Loop

Abstract

One of the major challenges in computer vision today is aligning machine perception with human perception. Taking an adversarial machine learning perspective, we investigate invariance-based adversarial examples, which highlight differences between computer vision and human perception. We conduct a study with 25 human subjects, collecting eye-gaze data and time-constrained classification performance, to study how occlusion-based perturbations affect human and machine performance on a classification task. We then propose two adaptive methods for generating invariance-based adversarial examples: one based on occlusion and the other on inserting patches from a second image. Both methods leverage the eye-tracking data obtained in our experiments. Our results suggest that invariance-based adversarial examples are possible even for complex data sets but must be crafted with due diligence. Further research in this direction may help better align computer and human vision.
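
To make the two generation strategies concrete, the sketch below illustrates, under loose assumptions, how gaze data could guide both kinds of perturbation: the most-fixated window of an image is either occluded with a uniform gray patch or overwritten with the corresponding patch from a second image, and a candidate qualifies as invariance-based only if the model's prediction is unchanged (while a human, as in the paper's user study, would now judge the image differently). This is not the authors' implementation; the function names, the brute-force window search, and the stand-in classifier are all illustrative assumptions.

```python
"""Minimal sketch (not the authors' code) of gaze-guided,
invariance-based adversarial example generation. Assumes:
  - image / donor: H x W x 3 float arrays in [0, 1]
  - gaze: H x W float heatmap of human fixation density
  - predict: a black-box classifier returning an integer label
"""
import numpy as np
from typing import Callable

Predict = Callable[[np.ndarray], int]


def top_gaze_window(gaze: np.ndarray, size: int) -> tuple[int, int]:
    """Top-left corner of the size x size window with the highest
    summed gaze density (brute force, for clarity only)."""
    h, w = gaze.shape
    best, best_rc = -np.inf, (0, 0)
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            s = gaze[r:r + size, c:c + size].sum()
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc


def occlusion_candidate(image, gaze, size=32, fill=0.5):
    """Occlude the most-fixated region with a uniform gray patch."""
    r, c = top_gaze_window(gaze, size)
    out = image.copy()
    out[r:r + size, c:c + size, :] = fill
    return out


def patch_insertion_candidate(image, gaze, donor, size=32):
    """Paste the corresponding patch from a second ('donor') image
    over the most-fixated region of the original."""
    r, c = top_gaze_window(gaze, size)
    out = image.copy()
    out[r:r + size, c:c + size, :] = donor[r:r + size, c:c + size, :]
    return out


def model_invariant(predict: Predict, original, candidate) -> bool:
    """Necessary condition: the model's prediction is unchanged.
    Whether a human would now label the image differently must be
    verified separately, e.g. in a user study."""
    return predict(candidate) == predict(original)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img, donor = rng.random((64, 64, 3)), rng.random((64, 64, 3))
    gaze = rng.random((64, 64))
    predict: Predict = lambda x: int(x.mean() > 0.5)  # stand-in classifier
    cand = patch_insertion_candidate(img, gaze, donor, size=16)
    print("model invariant:", model_invariant(predict, img, cand))
```

In this toy setup the gaze heatmap simply replaces a saliency map as the localization signal; the exhaustive window search is quadratic in image size and would be replaced by a pooled or strided scan for real images.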

Citation (APA)

Merkle, F., Sirbu, M. R., Nocker, M., & Schöttle, P. (2024). Generating Invariance-Based Adversarial Examples: Bringing Humans Back into the Loop. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14365 LNCS, pp. 15–27). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-51023-6_2
