A Deep Learning-Based Visual Map Generation for Mobile Robot Navigation

Abstract

Visual map-based robot navigation is a strategy that relies solely on the robot's vision system and involves four fundamental stages: learning (or mapping), localization, planning, and navigation. It is therefore paramount to model the environment well enough to support all of these stages. In this paper, we propose a novel framework to generate a visual map for both indoor and outdoor environments. The visual map comprises key images selected so that consecutive key images share visual information. The learning stage employs a pre-trained local feature transformer (LoFTR) whose matches are constrained by the epipolar geometry (a fundamental matrix) between two consecutive key images. Outliers are efficiently rejected with marginalizing sample consensus (MAGSAC) while the fundamental matrix is estimated. We conducted extensive experiments to validate our approach on six different datasets and compared its performance against hand-crafted methods.
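For context, the matching-and-verification step named in the abstract can be sketched with off-the-shelf tools. The following minimal Python sketch assumes kornia's pretrained LoFTR weights for matching and OpenCV's MAGSAC++ variant (cv2.USAC_MAGSAC) for robust fundamental-matrix estimation; the image paths and thresholds are illustrative placeholders, not the authors' exact pipeline.

import cv2
import torch
import kornia.feature as KF

def load_gray(path):
    # Read an image and convert it to a (1, 1, H, W) float tensor in [0, 1],
    # the input format LoFTR expects.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return torch.from_numpy(img)[None, None].float() / 255.0

# Pretrained LoFTR matcher; "outdoor" and "indoor" weights are available.
matcher = KF.LoFTR(pretrained="outdoor").eval()

# Hypothetical pair of consecutive candidate key images.
img0 = load_gray("key_image_0.png")
img1 = load_gray("key_image_1.png")

with torch.no_grad():
    matches = matcher({"image0": img0, "image1": img1})

pts0 = matches["keypoints0"].cpu().numpy()
pts1 = matches["keypoints1"].cpu().numpy()

# Estimate the fundamental matrix robustly with MAGSAC++;
# the returned mask marks which correspondences were kept as inliers.
F, inlier_mask = cv2.findFundamentalMat(
    pts0, pts1, cv2.USAC_MAGSAC,
    ransacReprojThreshold=1.0, confidence=0.999, maxIters=10000)

if F is not None:
    ratio = float(inlier_mask.mean())
    print(f"{len(pts0)} matches, inlier ratio {ratio:.2f}")

One natural use of such a sketch is deciding when to add a new key image to the map, for example when the inlier count against the previous key image falls below a threshold; that criterion is our illustration, not necessarily the selection rule used in the paper.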

Citation (APA)

García-Pintos, C. A., Aldana-Murillo, N. G., Ovalle-Magallanes, E., & Martínez, E. (2023). A Deep Learning-Based Visual Map Generation for Mobile Robot Navigation. Eng, 4(2), 1616–1634. https://doi.org/10.3390/eng4020092
