Adversarial Pixel Masking: A Defense against Physical Attacks for Pre-trained Object Detectors

Abstract

Object detection based on pre-trained deep neural networks (DNNs) has achieved impressive performance and enabled many applications. However, DNN-based object detectors have been shown to be vulnerable to physical adversarial attacks. Although recent efforts have been made to defend against such attacks, existing defenses either rely on strong assumptions or become less effective when applied to pre-trained object detectors. In this paper, we propose adversarial pixel masking (APM), a defense against physical attacks designed specifically for pre-trained object detectors. APM requires no assumption beyond the "patch-like" nature of a physical attack and works with pre-trained object detectors of different architectures and weights, making it a practical solution for many applications. We conduct extensive experiments, and the empirical results show that APM significantly improves model robustness without significantly degrading clean performance.
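
The abstract gives no implementation details, but the general pattern it describes (suppressing suspected patch pixels before running an unchanged, pre-trained detector) can be sketched roughly as below. This is a minimal, hypothetical illustration: the `PixelMasker` network, its architecture, and the thresholding step are assumptions made for illustration, not the paper's actual design.

```python
# Hypothetical sketch of the general idea behind adversarial pixel masking:
# a small mask-prediction network estimates which pixels belong to a
# patch-like physical attack, and those pixels are zeroed out before the
# image is passed to a frozen, pre-trained detector. All names and the
# mask-network architecture are illustrative assumptions.
import torch
import torch.nn as nn


class PixelMasker(nn.Module):
    """Predicts a per-pixel score in [0, 1]; low scores mark suspected patch pixels."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, images):
        return self.net(images)  # (B, 1, H, W) soft mask


def defend(images, masker, detector):
    """Mask suspected adversarial pixels, then run the unmodified detector."""
    mask = masker(images)                # soft per-pixel mask
    hard_mask = (mask > 0.5).float()     # binarize: keep vs. suppress
    masked_images = images * hard_mask   # zero out suspected patch pixels
    with torch.no_grad():                # the pre-trained detector stays frozen
        return detector(masked_images)
```

The point of the sketch is only that the defense wraps the detector rather than modifying it, which is what allows it to work with different pre-trained architectures and weights.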

Citation (APA)

Chiang, P. H., Chan, C. S., & Wu, S. H. (2021). Adversarial pixel masking: A defense against physical attacks for pre-trained object detectors. In MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia (pp. 1856–1865). Association for Computing Machinery. https://doi.org/10.1145/3474085.3475338
