Real-time depth completion based on LiDAR-stereo for autonomous driving

Citations: 0
Mendeley readers: 5

Abstract

The integration of multiple sensors is a crucial and growing trend in the development of autonomous driving technology. Depth images obtained by stereo matching of binocular cameras are easily degraded by environmental conditions and distance. LiDAR point clouds offer strong penetrability but are much sparser than binocular images. LiDAR-stereo fusion can combine the complementary advantages of the two sensors, maximizing the acquisition of reliable three-dimensional information and improving the safety of autonomous driving. Cross-sensor fusion is therefore a key issue in the field. This study proposes a real-time LiDAR-stereo depth completion network that fuses point clouds and binocular images through injection guidance, without using 3D convolution. A kernel-connected spatial propagation network is then applied to refine the depth. The resulting dense 3D output is more accurate for autonomous driving. Experimental results on the KITTI dataset show that our method achieves real-time performance while remaining effective. Further, experiments on the p-KITTI dataset demonstrate the solution's robustness to sensor defects and challenging environmental conditions.
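The abstract mentions refining a fused depth map with a spatial propagation network. As a rough illustration of the general idea (not the paper's specific kernel-connected variant), the sketch below shows CSPN-style refinement: each pixel's depth is repeatedly replaced by an affinity-weighted average of its 3x3 neighborhood, while the reliable sparse LiDAR measurements are re-injected after every step. The function name, the uniform affinities, and the 3x3 neighborhood are illustrative assumptions.

```python
import numpy as np

def cspn_refine(depth, affinity, sparse_depth, iters=12):
    """Illustrative spatial-propagation refinement (assumed, simplified).

    depth        : (H, W) initial dense depth estimate
    affinity     : (H, W, 9) per-pixel weights over the 3x3 neighborhood
    sparse_depth : (H, W) sparse LiDAR depth, 0 where no measurement
    """
    H, W = depth.shape
    valid = sparse_depth > 0
    d = depth.copy()
    for _ in range(iters):
        padded = np.pad(d, 1, mode="edge")
        # gather each pixel's 3x3 neighborhood into shape (H, W, 9)
        neigh = np.stack([padded[i:i + H, j:j + W]
                          for i in range(3) for j in range(3)], axis=-1)
        # normalize affinities so each pixel's weights sum to 1
        w = affinity / np.clip(affinity.sum(axis=-1, keepdims=True), 1e-6, None)
        d = (neigh * w).sum(axis=-1)
        # re-inject the trusted sparse LiDAR depths after each propagation step
        d[valid] = sparse_depth[valid]
    return d
```

With uniform affinities this reduces to repeated local averaging anchored at the LiDAR points; a learned affinity map (as in spatial propagation networks) instead steers the propagation along image structure.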

Citation (APA)

Wei, M., Zhu, M., Zhang, Y., Wang, J., & Sun, J. (2023). Real-time depth completion based on LiDAR-stereo for autonomous driving. Frontiers in Neurorobotics, 17. https://doi.org/10.3389/fnbot.2023.1124676
