3D semantic map construction based on point cloud and image fusion

Abstract

Accurate and robust positioning and mapping are core functions of autonomous mobile robots, and the ability to analyse and understand scenes is an important measure of their intelligence. In outdoor environments, most robots rely on GPS positioning; when the signal is weak, positioning errors corrupt the mapping results and make semantic map construction less robust. This research designs a semantic map construction system for large outdoor scenes that does not rely on GPS signals. It proposes a feature extraction scheme based on the sampling characteristics of the Livox-AVIA solid-state LiDAR and builds a factor graph optimisation model over frame poses and inertial measurement unit (IMU) pre-integrated poses, using a sliding window to fuse solid-state LiDAR and IMU data. The resulting laser-inertial odometry is then fused with camera object detection results, and the point cloud is semantically segmented with reference to closest-point distance and curvature to attach semantic information, realising the construction of a 3D semantic map of outdoor scenes. Experiments verify that the laser-inertial odometry based on factor graph optimisation achieves better positioning accuracy and a lower overall cumulative error at turns, and that the 3D semantic map obtained on this basis performs well.
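The sliding-window fusion of LiDAR and IMU relative-pose factors described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes translation-only 2D poses and Gaussian noise, and fuses per-frame relative displacements from LiDAR scan matching (`lidar_rel`) and IMU pre-integration (`imu_rel`) — both hypothetical inputs — by weighted least squares over the window, which is the linear core of a factor graph optimisation.

```python
# Minimal sketch (assumptions: translation-only 2D poses, Gaussian noise,
# x_0 anchored at the origin). Each factor constrains x_{i+1} - x_i = d.
import numpy as np

def fuse_window(lidar_rel, imu_rel, w_lidar=4.0, w_imu=1.0):
    """Estimate window poses x_0..x_N from two sets of
    relative-displacement factors via weighted least squares."""
    n = len(lidar_rel)            # number of relative factors; N+1 poses
    dim = lidar_rel[0].shape[0]
    # Unknowns: x_1..x_N stacked; two factor sets -> 2*n residual blocks.
    A = np.zeros((2 * n * dim, n * dim))
    b = np.zeros(2 * n * dim)
    row = 0
    for factors, w in ((lidar_rel, w_lidar), (imu_rel, w_imu)):
        s = np.sqrt(w)            # scale rows by sqrt of factor weight
        for i, d in enumerate(factors):
            # residual: (x_{i+1} - x_i) - d, with x_0 fixed at 0
            if i > 0:
                A[row:row + dim, (i - 1) * dim:i * dim] = -s * np.eye(dim)
            A[row:row + dim, i * dim:(i + 1) * dim] = s * np.eye(dim)
            b[row:row + dim] = s * d
            row += dim
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.vstack([np.zeros(dim), x.reshape(n, dim)])
```

When the two sensors agree, the recovered trajectory is simply the accumulated displacement; when they disagree, the weights (here favouring LiDAR) decide the compromise, playing the role of the factor covariances in the full SE(3) problem.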

APA

Li, H., Zhao, H., Ye, B., & Zhang, Y. (2023). 3D semantic map construction based on point cloud and image fusion. IET Cyber-Systems and Robotics, 5(1). https://doi.org/10.1049/csy2.12078
