Research on SLAM algorithm of mobile robot based on the fusion of 2D LiDAR and depth camera

49 citations · 63 Mendeley readers · Free to access
Abstract

This paper proposes a new graph-optimization-based Simultaneous Localization and Mapping (SLAM) method that fuses a 2D Light Detection and Ranging (LiDAR) sensor, an RGB-D camera, a wheel encoder, and an Inertial Measurement Unit (IMU). An unscented Kalman filter (UKF) performs joint positioning across the four sensors, and a dedicated strategy associates the 2D LiDAR point cloud with the RGB-D camera point cloud. In the sequential registration stage, 3D point cloud information generated by the RGB-D camera mounted below the 2D LiDAR is incorporated, and the 2D LiDAR point cloud and the 3D RGB-D point cloud are matched by Correlation Scan Matching (CSM). In the loop closure detection stage, a 3D point cloud descriptor further verifies the accuracy of each loop closure found by 2D LiDAR matching. The feasibility and effectiveness of the method are verified through theoretical derivation, simulation experiments, and physical experiments. The results show that the proposed multi-sensor SLAM framework achieves good mapping quality with high precision and accuracy.
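One way to picture the LiDAR/depth-camera fusion described above is to project RGB-D points that lie near the LiDAR's scan plane into a 2D pseudo-scan, which a scan matcher such as CSM can then compare against the laser scan. The sketch below is illustrative only, not the authors' implementation; the function name, height band, and angular binning are assumptions made for the example.

```python
import math

def depth_points_to_pseudo_scan(points, z_min=-0.05, z_max=0.05,
                                angle_min=-math.pi, angle_max=math.pi,
                                num_bins=360):
    """Project 3D RGB-D points (given in the LiDAR frame) that fall
    inside a height band around the LiDAR plane into a 2D pseudo-scan:
    one minimum range per angular bin, math.inf where no point falls."""
    bin_width = (angle_max - angle_min) / num_bins
    ranges = [math.inf] * num_bins
    for x, y, z in points:
        if not (z_min <= z <= z_max):
            continue  # point is outside the slice around the scan plane
        r = math.hypot(x, y)           # planar range to the point
        theta = math.atan2(y, x)       # bearing in the LiDAR frame
        idx = int((theta - angle_min) / bin_width)
        if 0 <= idx < num_bins:
            ranges[idx] = min(ranges[idx], r)  # keep the nearest return
    return ranges

# Example: two points straight ahead share one bin (nearest wins),
# a third point is rejected because it is too far above the plane.
scan = depth_points_to_pseudo_scan([(1.0, 0.0, 0.0),
                                    (2.0, 0.0, 0.0),
                                    (0.0, 1.0, 0.3)])
```

The resulting range array has the same shape as a 2D laser scan, so it can be fed to the same correlative matcher as the real LiDAR data.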

Citation (APA)

Mu, L., Yao, P., Zheng, Y., Chen, K., Wang, F., & Qi, N. (2020). Research on SLAM algorithm of mobile robot based on the fusion of 2D LiDAR and depth camera. IEEE Access, 8, 157628–157642. https://doi.org/10.1109/ACCESS.2020.3019659
