3D Object Detection for Self-Driving Cars Using Video and LiDAR: An Ablation Study

12 citations · 28 Mendeley readers

Abstract

Methods based on 64-beam LiDAR can provide very precise 3D object detection. However, highly accurate LiDAR sensors are extremely costly: a 64-beam model can cost approximately USD 75,000. We previously proposed SLS–Fusion (sparse LiDAR and stereo fusion), which fuses a low-cost four-beam LiDAR with stereo cameras and outperforms most advanced stereo–LiDAR fusion methods. In this paper, we analyze how the stereo and LiDAR sensors contribute to the performance of the SLS–Fusion model for 3D object detection as a function of the number of LiDAR beams used. Data coming from the stereo camera play a significant role in the fusion model; however, it is necessary to quantify this contribution and to identify how it varies with the number of LiDAR beams used inside the model. Thus, to evaluate the roles of the parts of the SLS–Fusion network that represent the LiDAR and stereo camera architectures, we propose dividing the model into two independent decoder networks. The results of this study show that, starting from four beams, increasing the number of LiDAR beams has no significant impact on SLS–Fusion performance. The presented results can guide practitioners' design decisions.
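The ablation described in the abstract, splitting the fusion model into two independent decoder branches so each modality's contribution can be measured separately, can be sketched as follows. This is a minimal illustrative sketch only: the class and function names are hypothetical and do not come from the SLS–Fusion codebase.

```python
# Hypothetical sketch of the ablation setup: the fused model is split into
# two independent decoder branches (stereo and LiDAR), and runs are
# configured over different beam counts. Names are illustrative, not from
# the actual SLS-Fusion implementation.

from dataclasses import dataclass


@dataclass
class AblationConfig:
    use_stereo: bool
    lidar_beams: int  # 0 disables the LiDAR branch entirely


def active_branches(cfg: AblationConfig) -> list:
    """Return which decoder branches participate in this ablation run."""
    branches = []
    if cfg.use_stereo:
        branches.append("stereo_decoder")
    if cfg.lidar_beams > 0:
        branches.append("lidar_decoder(%d beams)" % cfg.lidar_beams)
    return branches


# A typical ablation grid: stereo-only, LiDAR-only, and fusion at
# several beam counts (4, 8, 16, 64), mirroring the study's question of
# how performance varies with the number of beams.
grid = [
    AblationConfig(use_stereo=True, lidar_beams=0),
    AblationConfig(use_stereo=False, lidar_beams=4),
    AblationConfig(use_stereo=True, lidar_beams=4),
    AblationConfig(use_stereo=True, lidar_beams=64),
]

for cfg in grid:
    print(active_branches(cfg))
```

Each configuration is evaluated independently, which is what makes it possible to attribute detection performance to one branch or the other.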

Citation (APA)

Salmane, P. H., Rivera Velázquez, J. M., Khoudour, L., Mai, N. A. M., Duthon, P., Crouzil, A., … Velastin, S. A. (2023). 3D Object Detection for Self-Driving Cars Using Video and LiDAR: An Ablation Study. Sensors, 23(6). https://doi.org/10.3390/s23063223
