Physical adversarial attacks by projecting perturbations

Abstract

Research on adversarial attacks analyses how to slightly manipulate patterns like images so that a classifier believes it recognises a pattern with a wrong label, although the correct label is obvious to humans. In traffic sign recognition, previous physical adversarial attacks were mainly based on stickers or graffiti on the sign’s surface. In this paper, we propose and experimentally verify a new threat model that projects perturbations onto street signs via projectors or simulated laser pointers. No physical manipulation is required, which makes the attack difficult to detect. Attacks via projection imply new constraints, such as exclusively increasing colour intensities or manipulating only certain colour channels. As exemplary experiments, we fool neural networks into classifying stop signs as priority signs solely by projecting optimised perturbations onto original traffic signs.
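The projection constraint described above can be illustrated with a minimal sketch (not the authors' code): a PGD-style targeted attack in PyTorch whose perturbation is restricted to be non-negative, mimicking a projector that can only add light, and optionally masked to selected colour channels. The function name, step sizes, and the `channel_mask` tensor are illustrative assumptions, not details from the paper.

```python
# Minimal sketch, assuming a PyTorch classifier `model`, an input batch `image`
# in [0, 1], and a target class tensor `target_label`. The perturbation `delta`
# is kept non-negative (additive light only) and multiplied by a per-channel
# mask, approximating the projection constraints described in the abstract.
import torch
import torch.nn.functional as F

def projector_attack(model, image, target_label, channel_mask,
                     eps=0.2, alpha=0.01, steps=100):
    """Targeted attack with a non-negative, channel-masked additive perturbation."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        logits = model(torch.clamp(image + delta * channel_mask, 0.0, 1.0))
        loss = F.cross_entropy(logits, target_label)  # minimise => move towards target
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # signed gradient descent step
            delta.clamp_(0.0, eps)              # projector can only brighten pixels
        delta.grad.zero_()
    return torch.clamp(image + delta.detach() * channel_mask, 0.0, 1.0)

# Hypothetical usage: perturb only the red channel of stop-sign images.
# adv = projector_attack(net, stop_sign_batch, priority_sign_labels,
#                        channel_mask=torch.tensor([1.0, 0.0, 0.0]).view(1, 3, 1, 1))
```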

Citation (APA)

Worzyk, N., Kahlen, H., & Kramer, O. (2019). Physical adversarial attacks by projecting perturbations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11729 LNCS, pp. 649–659). Springer Verlag. https://doi.org/10.1007/978-3-030-30508-6_51
