Robust adversarial objects against deep learning models


Abstract

Previous work has shown that Deep Neural Networks (DNNs), including those currently deployed in many fields, are extremely vulnerable to maliciously crafted inputs known as adversarial examples. Despite extensive research on adversarial examples in many domains, adversarial 3D data, such as point clouds, remains comparatively unexplored. The study of adversarial 3D data is crucial given its impact in real-life, high-stakes scenarios such as autonomous driving. In this paper, we propose a novel adversarial attack against PointNet++, a deep neural network that performs classification and segmentation tasks using features learned directly from raw 3D points. In contrast to existing work, our attack generates not only adversarial point clouds, but also robust adversarial objects that in turn yield adversarial point clouds when sampled, both in simulation and after physical construction in the real world. We also demonstrate that our objects can bypass existing defense mechanisms designed specifically to counter adversarial 3D data.
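The abstract does not specify the attack's optimization procedure, so the following is only a loose, generic illustration of the underlying idea: perturbing each point of a point cloud along the gradient of a model's score (an FGSM-style step). The toy scorer, weights, and all function names here are invented for this sketch and are not the authors' method against PointNet++.

```python
import numpy as np

def toy_score(points, w):
    # Toy "classifier": score is the dot product of the cloud's centroid with w.
    # (Stands in for a real model's logit; NOT PointNet++.)
    return points.mean(axis=0) @ w

def adversarial_points(points, w, eps=0.05):
    # For this toy model the gradient of the score w.r.t. each point is w / n,
    # so an FGSM-style step moves every point by eps * sign(gradient).
    n = len(points)
    grad = np.tile(w / n, (n, 1))
    return points + eps * np.sign(grad)

rng = np.random.default_rng(0)
pts = rng.normal(size=(1024, 3))   # synthetic point cloud, 1024 points in 3D
w = np.array([1.0, -2.0, 0.5])     # invented toy-model weights

clean_score = toy_score(pts, w)
adv_score = toy_score(adversarial_points(pts, w), w)
# The small per-point perturbation systematically raises the score,
# i.e. pushes the cloud toward misclassification in this toy setup.
```

A real attack on PointNet++ would backpropagate through the network and, per the paper, additionally constrain the perturbation so the object stays adversarial under re-sampling and physical fabrication.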

Citation (APA)

Tsai, T., Yang, K., Ho, T. Y., & Jin, Y. (2020). Robust adversarial objects against deep learning models. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 954–962). AAAI press. https://doi.org/10.1609/aaai.v34i01.5443
