LLN-SLAM: A Lightweight Learning Network Semantic SLAM

Abstract

Semantic SLAM has been a hot research topic in computer vision in recent years. Mainstream semantic SLAM methods can extract semantics in real time, but they fail to run properly on resource-constrained platforms. This paper proposes LLN-SLAM, a lightweight semantic SLAM method for portable devices. The method extracts semantic information by matching object-detection results against projected point-cloud segments. To keep the runtime low, the lightweight MobileNet network is used for object detection, and Euclidean distance clustering is applied for point-cloud segmentation. In a typical augmented reality scenario, there is no way to prevent people other than the user from moving through the scene, which introduces large errors into visual positioning. Semantic information is therefore used to assist localization: the algorithm does not extract features on dynamic semantic objects. Experimental results show that the method runs stably on portable devices, and that the positioning error caused by moving dynamic objects is effectively corrected while an environmental semantic map is built.
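The point-cloud segmentation step named in the abstract, Euclidean distance clustering, can be sketched roughly as follows. This is a minimal illustration of the general technique, not the paper's implementation; the function name, distance threshold, and sample points are assumptions for the example.

```python
# Hypothetical sketch of Euclidean distance clustering for point-cloud
# segmentation: points chained by pairwise distances below a threshold
# are grouped into one cluster via a breadth-first flood fill.
# The threshold and data are illustrative, not from the paper.
from collections import deque
import math

def euclidean_cluster(points, threshold=0.5):
    """Group 3-D points into clusters whose members are connected by
    distances below `threshold`. O(n^2) neighbor search; a real SLAM
    system would use a k-d tree (as in PCL) to accelerate this step."""
    n = len(points)
    visited = [False] * n
    clusters = []
    for seed in range(n):
        if visited[seed]:
            continue
        queue = deque([seed])
        visited[seed] = True
        cluster = []
        while queue:
            i = queue.popleft()
            cluster.append(i)
            for j in range(n):
                if not visited[j] and math.dist(points[i], points[j]) < threshold:
                    visited[j] = True
                    queue.append(j)
        clusters.append(cluster)
    return clusters

# Two well-separated groups of points fall into two clusters.
pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 5.0, 5.0), (5.1, 5.0, 5.0)]
print(euclidean_cluster(pts))  # → [[0, 1], [2, 3]]
```

Each resulting cluster can then be projected into the image and matched against a detected bounding box to attach a semantic label, as the abstract describes.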

Citation (APA)

Qu, X., & Li, W. (2019). LLN-SLAM: A Lightweight Learning Network Semantic SLAM. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11936 LNCS, pp. 253–265). Springer. https://doi.org/10.1007/978-3-030-36204-1_21
