Mobile Robot Localization and Mapping Algorithm Based on the Fusion of Image and Laser Point Cloud


Abstract

Image features detected by a visual SLAM (simultaneous localization and mapping) algorithm lack scale information; the accumulation of many features without depth information causes scale ambiguity, which leads to degeneration and tracking failure. In this paper, we introduce the lidar point cloud to provide additional depth information for the image features during ego-motion estimation, assisting visual SLAM. To enhance the stability of the pose estimation, the nonlinear-optimization-based front end of visual SLAM is improved. Epipolar error is introduced into the frame-to-frame pose estimation, and residuals are computed according to whether each feature point has depth information. The feature residuals are used to construct the objective function, which is solved iteratively for the robot's pose. A keyframe-based method is used to optimize the pose locally, reducing the complexity of the optimization problem. The experimental results show that the improved algorithm achieves better results on the KITTI dataset and in outdoor scenes. Compared with a purely visual SLAM algorithm, the trajectory error of the mobile robot is reduced by 52.7%. The LV-SLAM algorithm proposed in this paper has good adaptability and robustness in different environments.
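The two residual types the abstract distinguishes — a 3D–2D reprojection residual for features whose depth is recovered from the lidar point cloud, and an epipolar residual for features without depth — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the pinhole intrinsics `K`, and the pose parameterization `(R, t)` are illustrative assumptions.

```python
import numpy as np

def skew(v):
    # Skew-symmetric matrix so that skew(v) @ u == np.cross(v, u).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def reprojection_residual(p_world, uv_obs, R, t, K):
    # Residual for a feature WITH lidar depth: project the 3D point with the
    # candidate pose (R, t) and pinhole intrinsics K, then compare against the
    # observed pixel location. (Illustrative sketch, not the paper's code.)
    p_cam = R @ p_world + t
    uvw = K @ p_cam
    uv_pred = uvw[:2] / uvw[2]
    return uv_pred - uv_obs

def epipolar_residual(uv1, uv2, R, t, K):
    # Residual for a feature WITHOUT depth: the normalized coordinates of a
    # correspondence must satisfy the epipolar constraint x2^T E x1 = 0,
    # where E = [t]_x R is the essential matrix.
    K_inv = np.linalg.inv(K)
    x1 = K_inv @ np.array([uv1[0], uv1[1], 1.0])
    x2 = K_inv @ np.array([uv2[0], uv2[1], 1.0])
    E = skew(t) @ R
    return x2 @ E @ x1
```

In a front end of this kind, both residual types would typically be stacked into one objective and minimized iteratively (e.g. with Gauss–Newton or Levenberg–Marquardt) to estimate the frame-to-frame pose.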

Citation (APA)

Dai, J., Li, D., Li, Y., Zhao, J., Li, W., & Liu, G. (2022). Mobile Robot Localization and Mapping Algorithm Based on the Fusion of Image and Laser Point Cloud. Sensors, 22(11). https://doi.org/10.3390/s22114114
