An Universal Perturbation Generator for Black-Box Attacks Against Object Detectors


Abstract

With the continuous development of deep neural networks (DNNs), they have become the primary means of solving problems in computer vision. However, recent research has shown that DNNs are vulnerable to carefully crafted adversarial examples. In this paper, we used a deep neural network to generate adversarial examples that attack black-box object detectors. We trained a generation network to produce universal perturbations, achieving a cross-task attack against black-box object detectors and demonstrating the feasibility of task-generalizable attacks: the attack generates universal perturbations on classifiers and then uses them to attack object detectors. We verified the effectiveness of the attack on two representative object detectors: the proposal-based Faster R-CNN and the regression-based YOLOv3.
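The abstract gives no implementation details, but the pipeline it describes can be sketched: train a small generator network against a white-box surrogate classifier so that its output, a single image-agnostic perturbation, degrades the classifier's predictions, then add that same perturbation to inputs of a black-box detector. The sketch below is a minimal PyTorch illustration of that idea under stated assumptions, not the authors' code; the ResNet-18 surrogate, the generator architecture, the budget EPS, the 224x224 input size, and the yolo_model name in the final comment are all illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): a generator maps a fixed noise
# seed to one universal perturbation, trained to fool a surrogate classifier.
import torch
import torch.nn as nn
import torchvision.models as models

EPS = 10 / 255  # L-infinity perturbation budget (assumed value)

class PerturbationGenerator(nn.Module):
    """Small conv net mapping a fixed noise seed to an image-sized perturbation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, z):
        return EPS * self.net(z)  # scale output into the budget

def train_generator(loader, steps=1000, device="cpu"):
    # White-box surrogate classifier (assumed: ResNet-18); frozen weights.
    surrogate = models.resnet18(weights="IMAGENET1K_V1").to(device).eval()
    for p in surrogate.parameters():
        p.requires_grad_(False)

    gen = PerturbationGenerator().to(device)
    opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
    z = torch.randn(1, 3, 224, 224, device=device)  # fixed seed -> one universal delta

    for _, (x, _) in zip(range(steps), loader):
        x = x.to(device)  # images assumed in [0, 1], shape (B, 3, 224, 224)
        delta = gen(z)    # broadcasts over the batch
        logits_clean = surrogate(x)
        logits_adv = surrogate((x + delta).clamp(0, 1))
        # Fooling loss: push adversarial predictions away from the clean labels.
        loss = -nn.functional.cross_entropy(logits_adv, logits_clean.argmax(dim=1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return gen(z).detach()  # the universal perturbation

# Cross-task transfer step: add the same delta to every detector input, e.g.
# detections = yolo_model((image + delta).clamp(0, 1))  # yolo_model is hypothetical
```

The same perturbation tensor is reused for every image, which is what makes it "universal"; no queries to the black-box detector are needed during training, since the gradient signal comes entirely from the surrogate classifier.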

Citation (APA)

Zhao, Y., Wang, K., Xue, Y., Zhang, Q., & Zhang, X. (2019). An Universal Perturbation Generator for Black-Box Attacks Against Object Detectors. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11910 LNCS, pp. 63–72). Springer. https://doi.org/10.1007/978-3-030-34139-8_7
