Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors


Abstract

We present a systematic study of the transferability of adversarial attacks on state-of-the-art object detection frameworks. Using standard detection datasets, we train patterns that suppress the objectness scores produced by a range of commonly used detectors, and ensembles of detectors. Through extensive experiments, we benchmark the effectiveness of adversarially trained patches under both white-box and black-box settings, and quantify transferability of attacks between datasets, object classes, and detector models. Finally, we present a detailed study of physical world attacks using printed posters and wearable clothes, and rigorously quantify the performance of such attacks with different metrics.
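The patch-training idea summarized above lends itself to a short sketch. The example below is a hypothetical, minimal illustration and not the authors' released code: `ToyDetector`, `apply_patch`, and `train_patch` are placeholder names, the stand-in detector only mimics the interface of a real objectness head, and the patch is pasted at a fixed location rather than warped onto a detected person. Under those assumptions, it shows how a pattern can be optimized by gradient descent to suppress the objectness scores a detector produces for the patched image.

```python
# Hypothetical sketch: train an adversarial patch that suppresses objectness scores.
# ToyDetector is a stand-in for a real detector head (e.g., a YOLO or RPN objectness
# branch); only its interface (image -> per-cell objectness logits) matters here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyDetector(nn.Module):
    """Stand-in detector head: maps an image to one objectness logit per grid cell."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)
        self.obj_head = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, x):
        return self.obj_head(F.relu(self.backbone(x)))  # (N, 1, H/2, W/2)


def apply_patch(images, patch, top=60, left=60):
    """Paste the patch onto each image at a fixed location (a real attack would
    scale and warp the patch onto the target object)."""
    patched = images.clone()
    ph, pw = patch.shape[1:]
    patched[:, :, top:top + ph, left:left + pw] = patch
    return patched


def train_patch(detector, images, steps=100, lr=0.01):
    """Optimize a patch so the detector's strongest objectness score is driven down."""
    patch = torch.rand(3, 64, 64, requires_grad=True)  # the adversarial pattern
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        scores = torch.sigmoid(detector(apply_patch(images, patch)))
        # Suppress the maximum objectness response per image, averaged over the batch.
        loss = scores.max(dim=-1).values.max(dim=-1).values.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0, 1)  # keep the pattern in a valid (printable) pixel range
    return patch.detach()


if __name__ == "__main__":
    detector = ToyDetector().eval()
    images = torch.rand(4, 3, 256, 256)  # placeholder batch; the paper uses detection datasets
    adv_patch = train_patch(detector, images)
    print(adv_patch.shape)
```

An ensemble or black-box variant, as studied in the paper, would follow the same loop but sum the suppression loss over several detectors or evaluate the trained patch against detectors not used during optimization.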

Citation (APA)

Wu, Z., Lim, S. N., Davis, L. S., & Goldstein, T. (2020). Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12349 LNCS, pp. 1–17). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58548-8_1
