Abstract
Autonomous vehicles rely heavily on sensors such as cameras and LiDAR, which provide real-time information about their surroundings for the tasks of perception, planning, and control. Typically, a LiDAR can only provide a sparse point cloud owing to its limited number of scanning lines. By employing depth completion, a dense depth map can be generated by assigning each camera pixel a corresponding depth value. However, existing depth completion convolutional neural networks are so complex that they require high-end GPUs for processing, and thus they are not applicable to real-time autonomous driving. In this article, a lightweight network is proposed for the task of LiDAR point cloud depth completion. With a 96.2% reduction in the number of parameters, it still achieves performance comparable to the state-of-the-art network (9.3% better in MAE but 3.9% worse in RMSE). For real-time embedded platforms, the depthwise separable technique is applied to both convolution and deconvolution operations, and the number of parameters decreases further by a factor of 7.3, with only a small percentage increase in error. Moreover, a system-on-chip architecture for depth completion is developed on a PYNQ-based FPGA platform that achieves real-time processing for the HDL-64E LiDAR at 11.1 frames per second.
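The roughly 7× parameter reduction from depthwise separable convolutions can be illustrated with a quick parameter count. This is a generic sketch with an assumed layer shape (64 input channels, 64 output channels, 3×3 kernels), not the paper's exact network configuration:

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k x k filter per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise: one k x k filter per input channel.
    # Pointwise: a 1 x 1 convolution mixing channels.
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 64, 3  # assumed layer shape for illustration
std = conv_params(c_in, c_out, k)            # 36864
dws = depthwise_separable_params(c_in, c_out, k)  # 4672
print(std, dws, round(std / dws, 1))         # ratio ~7.9
```

For typical channel counts the ratio is close to k² (here ≈7.9 for 3×3 kernels), which is consistent with the factor-of-7.3 reduction reported in the abstract.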
Bai, L., Zhao, Y., Elhousni, M., & Huang, X. (2020). DepthNet: Real-Time LiDAR Point Cloud Depth Completion for Autonomous Vehicles. IEEE Access, 8, 227825–227833. https://doi.org/10.1109/ACCESS.2020.3045681