Defenses Against Multi-sticker Physical Domain Attacks on Classifiers

Abstract

Recently, physical-domain adversarial attacks have drawn significant attention from the machine learning community. One important attack, proposed by Eykholt et al., can fool a classifier by placing black and white stickers on an object such as a road sign. While this attack may pose a significant threat to visual classifiers, there are currently no defenses designed to protect against it. In this paper, we propose new defenses that can protect against multi-sticker attacks. We present defensive strategies capable of operating when the defender has full, partial, or no prior information about the attack. Through extensive experiments, we show that our proposed defenses outperform existing defenses against physical attacks when presented with a multi-sticker attack.
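
The abstract does not describe how the proposed defenses operate, so the sketch below is not the authors' method. Purely as an illustration of the general idea, and under the assumption that sticker perturbations appear as near-black or near-white regions, it shows a simple detect-and-fill preprocessing step applied before classification; the function name, thresholds, and use of SciPy are all illustrative choices.

    # Hypothetical sketch: a "detect-and-fill" preprocessing step against
    # black/white sticker perturbations. This is NOT the defense proposed by
    # Zhao & Stamm; it only illustrates locating suspiciously saturated
    # regions and replacing them before the image reaches the classifier.
    import numpy as np
    from scipy import ndimage  # assumed available for dilation and median filtering

    def suppress_sticker_pixels(image, low=15, high=240, dilate_iters=2, window=7):
        """Replace near-black/near-white pixels with a local median estimate.

        image: (H, W, 3) uint8 array in RGB order.
        low/high: intensity thresholds that flag candidate sticker pixels.
        """
        gray = image.mean(axis=2)
        # Flag pixels that are almost fully black or fully white.
        mask = (gray <= low) | (gray >= high)
        # Grow the mask slightly so sticker borders are also covered.
        mask = ndimage.binary_dilation(mask, iterations=dilate_iters)
        # Median-filter each channel and use it to fill the masked pixels.
        filled = image.copy()
        for c in range(3):
            med = ndimage.median_filter(image[:, :, c], size=window)
            filled[:, :, c][mask] = med[mask]
        return filled, mask

    # Usage: feed the cleaned image to the classifier instead of the raw input.
    # img = np.asarray(Image.open("stop_sign.png").convert("RGB"))
    # cleaned, sticker_mask = suppress_sticker_pixels(img)

A real defense would need to handle stickers that are not purely black or white and to avoid degrading clean inputs; this sketch only conveys the mask-and-restore intuition.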

Citation (APA)

Zhao, X., & Stamm, M. C. (2020). Defenses Against Multi-sticker Physical Domain Attacks on Classifiers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12535 LNCS, pp. 202–219). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-66415-2_13
