Visual Loop Closure Detection Based on Stacked Convolutional and Autoencoder Neural Networks

Abstract

Simultaneous localization and mapping (SLAM) is the basis for solving the problem of autonomous robot movement, and loop closure detection is vital for visual SLAM. Correctly detecting closed loops effectively reduces the accumulated error in the robot's pose estimates, which plays an important role in building a globally consistent environment map. Traditional loop closure detection relies on handcrafted image features, which are sensitive to dynamic environments and lack robustness. In this paper, a method based on stacked convolutional and autoencoder neural networks is proposed to automatically extract image features and reduce their dimensionality. These features exhibit multiple invariances under image transformations, so the method is robust to environmental changes. Experiments on public datasets show that the proposed method outperforms traditional methods in terms of precision, recall, and average precision, validating its effectiveness.
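Once per-frame descriptors have been extracted and reduced in dimension, loop closure detection reduces to comparing the current frame's descriptor against those of earlier frames. The abstract does not specify the similarity measure or thresholds, so the sketch below is an illustrative assumption: cosine similarity with a fixed threshold, plus a minimum frame gap so that temporally adjacent (trivially similar) frames are not reported as loops. The function and parameter names are hypothetical, not from the paper.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two descriptor vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def detect_loop_closures(descriptors, threshold=0.95, min_gap=10):
    """Return (earlier, current) frame index pairs whose descriptors match.

    descriptors : sequence of 1-D feature vectors, one per frame
    threshold   : minimum cosine similarity to accept a loop (assumed value)
    min_gap     : skip frames closer than this to avoid trivial matches
    """
    loops = []
    for i in range(len(descriptors)):
        for j in range(i - min_gap):  # only frames at least min_gap earlier
            if cosine_similarity(descriptors[i], descriptors[j]) >= threshold:
                loops.append((j, i))
    return loops

# Usage: random 64-D descriptors, with frame 12 revisiting frame 0's place.
rng = np.random.default_rng(0)
descs = [rng.normal(size=64) for _ in range(12)]
descs.append(descs[0].copy())  # frame 12 repeats frame 0's descriptor
print(detect_loop_closures(descs))
```

In practice the threshold and gap would be tuned on a validation sequence, and the pairwise scan could be replaced by an approximate nearest-neighbor index for long trajectories.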

Citation (APA)
Wang, F., Ruan, X., & Huang, J. (2019). Visual Loop Closure Detection Based on Stacked Convolutional and Autoencoder Neural Networks. In IOP Conference Series: Materials Science and Engineering (Vol. 563). Institute of Physics Publishing. https://doi.org/10.1088/1757-899X/563/5/052082
