Robustness Improvement of Visual Templates Matching Based on Frequency-Tuned Model in RatSLAM

Abstract

This paper describes an improved brain-inspired simultaneous localization and mapping system (RatSLAM) that extracts visual features from saliency maps generated by a frequency-tuned (FT) model. In the traditional RatSLAM algorithm, the visual template is organized as a one-dimensional vector whose values depend only on pixel intensity, which makes the feature susceptible to changes in illumination. In contrast to that approach, which generates visual templates directly from raw RGB images, we use an FT model to convert RGB images into saliency maps from which the visual templates are obtained. Templates extracted from the saliency maps retain more of the feature information contained in the original images. Our experimental results demonstrate that the accuracy of loop closure detection improves, as measured by the number of loop closures detected by our method compared with the traditional RatSLAM system. We further verify that the proposed FT-model-based visual templates improve the robustness with which RatSLAM identifies familiar visual scenes.
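The frequency-tuned saliency model referenced in the abstract (Achanta et al., 2009) assigns each pixel a saliency equal to the Euclidean distance between the image's mean feature vector and the pixel's Gaussian-blurred feature vector. A minimal sketch of that computation is shown below. Two simplifications are assumptions of this sketch, not part of the paper: the original FT model operates in CIELab color space, whereas this sketch works directly on RGB channels, and the blur parameter `sigma` and the function names are illustrative choices.

```python
import numpy as np

def gaussian_blur(channel, sigma=1.0):
    """Separable Gaussian blur of a single 2-D channel."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    out = channel.astype(float)
    # Convolve along rows, then columns (separable kernel).
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, out)
    return out

def ft_saliency(img):
    """Frequency-tuned saliency: S(x, y) = || I_mu - I_blur(x, y) ||_2,
    where I_mu is the mean feature vector over the whole image and
    I_blur is the Gaussian-blurred image.  Note: the original model
    uses CIELab features; RGB is used here as a rough approximation."""
    img = img.astype(float)
    blurred = np.stack(
        [gaussian_blur(img[..., c]) for c in range(img.shape[-1])],
        axis=-1)
    mean_vec = img.reshape(-1, img.shape[-1]).mean(axis=0)
    return np.linalg.norm(blurred - mean_vec, axis=-1)
```

In this formulation, pixels whose (smoothed) color differs strongly from the global mean receive high saliency, so large uniform regions are suppressed while distinctive structures stand out; the resulting map can then be downsampled into a template vector in place of the raw-intensity template used by traditional RatSLAM.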

Citation (APA)

Yu, S., Wu, J., Xu, H., Sun, R., & Sun, L. (2020). Robustness Improvement of Visual Templates Matching Based on Frequency-Tuned Model in RatSLAM. Frontiers in Neurorobotics, 14. https://doi.org/10.3389/fnbot.2020.568091
