Real-time stereo visual SLAM in large-scale environments based on SIFT fingerprints

Abstract

This paper presents a new method for real-time SLAM computation applied to autonomous robot navigation in large-scale, unrestricted environments. It relies exclusively on the visual information provided by a low-cost wide-angle stereo camera. Our approach divides the global map into local sub-maps identified by so-called SIFT fingerprints. At the sub-map level (low-level SLAM), 3D sequential mapping of natural landmarks and the robot location/orientation are obtained using a top-down Bayesian method to model the dynamic behavior. A higher abstraction level (high-level SLAM) has been added to reduce the globally accumulated drift while keeping real-time constraints; it uses a correction method based on the SIFT fingerprints taken for each sub-map. A comparison of the low-level SLAM using our method and using SIFT features has been carried out. Experimental results in a real large-scale environment are presented. © Springer-Verlag Berlin Heidelberg 2007.
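The paper itself does not include code; the following is a minimal, hypothetical Python/OpenCV sketch of the sub-map fingerprint idea described in the abstract: each local sub-map stores the SIFT descriptors of a reference view as its fingerprint, and re-recognizing a stored fingerprint from the current camera view is what would trigger the high-level drift correction. Names such as SubMap, recognize_submap, and the min_matches threshold are illustrative assumptions, not the authors' implementation.

```python
import cv2  # OpenCV provides SIFT via cv2.SIFT_create()


class SubMap:
    """A local sub-map identified by a SIFT 'fingerprint': the set of SIFT
    descriptors extracted from the sub-map's reference view (assumed structure)."""

    def __init__(self, submap_id, reference_image):
        self.id = submap_id
        sift = cv2.SIFT_create()
        self.keypoints, self.fingerprint = sift.detectAndCompute(reference_image, None)
        self.landmarks = []    # 3D landmarks estimated by the low-level SLAM
        self.robot_poses = []  # robot trajectory inside this sub-map


def match_fingerprints(desc_a, desc_b, ratio=0.75):
    """Count SIFT correspondences between two fingerprints using Lowe's ratio test."""
    if desc_a is None or desc_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_a, desc_b, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    return len(good)


def recognize_submap(current_image, submaps, min_matches=30):
    """High-level step (sketch): compare the current view against all stored
    fingerprints. A strong match signals re-entry into a known sub-map, which
    would trigger the global drift correction at the sub-map level."""
    sift = cv2.SIFT_create()
    _, query = sift.detectAndCompute(current_image, None)
    best_id, best_score = None, 0
    for sm in submaps:
        score = match_fingerprints(query, sm.fingerprint)
        if score > best_score:
            best_id, best_score = sm.id, score
    return best_id if best_score >= min_matches else None
```

The actual drift-correction update over the sub-map graph is not shown here; the sketch only illustrates how a descriptor-set fingerprint could identify and re-recognize sub-maps in real time.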

Citation (APA)

Schleicher, D., Bergasa, L. M., Ocaña, M., Barea, R., & López, E. (2007). Real-time stereo visual SLAM in large-scale environments based on SIFT fingerprints. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4739 LNCS, pp. 684–691). Springer Verlag. https://doi.org/10.1007/978-3-540-75867-9_86
