Environmental Detection for Autonomous Vehicles Based on Multi-View Image and 3D LiDAR Point Cloud Map

Abstract

In this paper, a fusion system combining 3D point clouds and depth maps for 3D environment detection is proposed. The algorithm uses a depth map generated by a virtual perspective mapping technique to assist the matching of LiDAR point cloud map information, and distinguishes objects by distance and color to facilitate observation. At the same time, the depth map is used to assist in generating two-dimensional point cloud information that repairs missing regions and restores the point cloud. The method addresses the sparseness of point cloud returns from distant objects, reflections caused by vehicle body materials, and light penetration through glass, so that the point cloud map information becomes more accurate. In a series of experiments, the proposed method reaches an accuracy of 91.27%, a matching-completeness precision of 91.61%, an F1-measure of 92.37%, and a recall of 93.14%, outperforming most comparable methods. The proposed method successfully overcomes the missing and sparse LiDAR point cloud map data caused by environmental factors and achieves a high-precision point cloud map.
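
The article itself provides no source code. As a rough illustration of the kind of LiDAR-to-camera projection step that a point cloud and depth map fusion pipeline generally relies on, the sketch below rasterizes LiDAR points into a sparse depth image; the function name, the intrinsic matrix K, the extrinsic transform T_cam_lidar, and the image size are hypothetical placeholders rather than details taken from the paper.

import numpy as np

def project_lidar_to_depth_map(points_xyz, K, T_cam_lidar, image_size):
    # Illustrative sketch only; not the authors' implementation.
    # points_xyz  : (N, 3) LiDAR points in the LiDAR frame
    # K           : (3, 3) camera intrinsic matrix (assumed known from calibration)
    # T_cam_lidar : (4, 4) LiDAR-to-camera extrinsic transform (assumed known)
    # image_size  : (height, width) of the virtual view
    h, w = image_size

    # Move points into the camera frame using homogeneous coordinates.
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Discard points behind (or almost on) the image plane.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Perspective projection with the intrinsics, then normalize by depth.
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    depth = pts_cam[:, 2]

    # Rasterize into a sparse depth map, keeping the nearest return per pixel.
    depth_map = np.full((h, w), np.inf)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, di in zip(u[valid], v[valid], depth[valid]):
        if di < depth_map[vi, ui]:
            depth_map[vi, ui] = di

    depth_map[np.isinf(depth_map)] = 0.0  # 0 marks pixels with no LiDAR return
    return depth_map

Pixels left at zero correspond to regions with no LiDAR return; these are the sparse or missing areas that the paper's depth-map-assisted repair step is intended to fill.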
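
As a quick consistency check on the reported metrics, taking precision P = 0.9161 and recall R = 0.9314 and applying the standard F1 definition reproduces the reported F1-measure of 92.37%:

F_1 = \frac{2PR}{P + R} = \frac{2 \times 0.9161 \times 0.9314}{0.9161 + 0.9314} \approx 0.9237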

Citation (APA)

Fan, Y. C., Lin, G. H., Xiao, Y. S., & Yan, W. Z. (2023). Environmental Detection for Autonomous Vehicles Based on Multi-View Image and 3D LiDAR Point Cloud Map. IEEE Access, 11, 70408–70424. https://doi.org/10.1109/ACCESS.2023.3292118
