Today, most autonomous vehicles (AVs) rely on LiDAR (Light Detection and Ranging) perception to acquire accurate information about their immediate surroundings. In LiDAR-based perception systems, semantic segmentation plays a critical role: it divides LiDAR point clouds into meaningful regions consistent with human perception and provides AVs with a semantic understanding of the driving environment. However, existing semantic segmentation models implicitly assume that they operate in a reliable and secure environment, which may not hold in practice. In this paper, we investigate adversarial attacks against LiDAR semantic segmentation in autonomous driving. Specifically, we propose a novel adversarial attack framework with which an attacker can fool LiDAR semantic segmentation simply by placing common objects (e.g., cardboard and road signs) at chosen locations in the physical space. We conduct extensive real-world experiments to evaluate the performance of the proposed attack framework. The experimental results show that our attack achieves a success rate above 90% in real-world driving environments. To the best of our knowledge, this is the first study of physically realizable adversarial attacks against LiDAR point cloud semantic segmentation with real-world evaluations.
Zhu, Y., Miao, C., Hajiaghajani, F., Huai, M., Su, L., & Qiao, C. (2021). Adversarial Attacks against LiDAR Semantic Segmentation in Autonomous Driving. In SenSys 2021 - Proceedings of the 2021 19th ACM Conference on Embedded Networked Sensor Systems (pp. 329–342). Association for Computing Machinery, Inc. https://doi.org/10.1145/3485730.3485935