Active Perception Using Light Curtains for Autonomous Driving


Abstract

Most real-world 3D sensors, such as LiDARs, perform fixed scans of the entire environment and are decoupled from the recognition system that processes the sensor data. In this work, we propose a method for 3D object recognition using light curtains: resource-efficient, controllable sensors that measure depth at user-specified locations in the environment. Crucially, we propose using the prediction uncertainty of a deep-learning-based 3D point cloud detector to guide active perception. Given the neural network's uncertainty, we develop a novel optimization algorithm that places light curtains so as to maximize coverage of uncertain regions. Efficient optimization is achieved by encoding the physical constraints of the device into a constraint graph, which is optimized with dynamic programming. We show how a 3D detector can be trained to detect objects in a scene by sequentially placing uncertainty-guided light curtains to successively improve detection accuracy. Links to code can be found on the project webpage.
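
The abstract describes finding a curtain placement that maximizes coverage of uncertain regions via dynamic programming over a constraint graph encoding the device's physical limits. Below is a minimal sketch of that idea (not the authors' released code): the uncertainty map, depth-bin discretization, and the simple neighboring-ray feasibility rule are illustrative assumptions.

```python
# Sketch: plan a light curtain by dynamic programming over a constraint graph.
# Nodes are (camera ray, depth bin) pairs; edges connect depth bins on adjacent
# rays that the device can physically transition between.
import numpy as np

def plan_light_curtain(uncertainty, feasible):
    """Return per-ray depth-bin indices maximizing total covered uncertainty.

    uncertainty: (R, D) array; uncertainty[r, d] is the detector uncertainty if
                 the curtain images depth bin d on camera ray r.
    feasible:    (D, D) boolean array; feasible[d_prev, d] is True if the device
                 can move from depth bin d_prev on one ray to bin d on the next
                 (e.g. a bound on galvanometer velocity, assumed here).
    """
    R, D = uncertainty.shape
    value = np.full((R, D), -np.inf)
    backptr = np.zeros((R, D), dtype=int)

    value[0] = uncertainty[0]  # any depth bin is allowed on the first ray
    for r in range(1, R):
        for d in range(D):
            # best predecessor on the previous ray that can reach depth bin d
            prev = np.where(feasible[:, d], value[r - 1], -np.inf)
            backptr[r, d] = int(np.argmax(prev))
            value[r, d] = prev[backptr[r, d]] + uncertainty[r, d]

    # backtrack the highest-value path through the constraint graph
    curtain = np.zeros(R, dtype=int)
    curtain[-1] = int(np.argmax(value[-1]))
    for r in range(R - 1, 0, -1):
        curtain[r - 1] = backptr[r, curtain[r]]
    return curtain

# Toy usage: 64 rays, 40 depth bins, neighboring rays may differ by at most 2 bins.
rng = np.random.default_rng(0)
U = rng.random((64, 40))
bins = np.arange(40)
F = np.abs(bins[:, None] - bins[None, :]) <= 2
print(plan_light_curtain(U, F)[:10])
```

In the paper's setting, the uncertainty map would come from the 3D detector's predictions and the feasibility relation from the light-curtain device's actual kinematic constraints; the Viterbi-style recursion above is only meant to illustrate why the problem decomposes ray by ray.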

Citation (APA)

Ancha, S., Raaj, Y., Hu, P., Narasimhan, S. G., & Held, D. (2020). Active Perception Using Light Curtains for Autonomous Driving. In Lecture Notes in Computer Science (Vol. 12350 LNCS, pp. 751–766). Springer. https://doi.org/10.1007/978-3-030-58558-7_44
