On-line SLAM using clustered landmarks with omnidirectional vision

Abstract

The problem of SLAM (simultaneous localization and mapping) is a fundamental problem in autonomous robotics. It arises when a robot must build a map of the regions it has navigated while localizing itself within that map, using results from one step to increase precision in the other by eliminating errors inherent to the sensors. One common solution consists of establishing landmarks in the environment, which serve as reference points for absolute localization estimates and form a sparse map that is iteratively refined as more information is obtained. This paper introduces a method of landmark selection and clustering in omnidirectional images for on-line SLAM, using the SIFT algorithm for initial feature extraction and assuming no prior knowledge of the environment. Visual sensors are an attractive way of collecting information from the environment, but they tend to create an excessive number of landmarks that are individually prone to false matches due to image noise and object similarities. By clustering several features into single objects, our approach eliminates landmarks that do not consistently represent the environment, decreasing computational cost and increasing the reliability of the incorporated information. Tests conducted in real navigational situations show a significant improvement in performance without loss of quality. Copyright © 2010 by ABCM.
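The clustering idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the clustering rule (greedy single-linkage on 2D image coordinates), the distance threshold, and the minimum cluster size are all assumptions for the example. The point it demonstrates is the abstract's filtering step: features that group together form a landmark, while isolated features are discarded as unreliable.

```python
# Illustrative sketch (not the paper's method): group 2D image features
# into landmark clusters and discard clusters too small to be reliable.
# Threshold values below are assumptions chosen for the example.

def cluster_features(points, radius=20.0, min_size=3):
    """Greedy single-linkage clustering of (x, y) feature coordinates.

    A feature joins the first cluster containing any point within
    `radius`; clusters with fewer than `min_size` features are dropped
    as landmarks prone to false matches (image noise, similar objects).
    """
    clusters = []
    for p in points:
        for cluster in clusters:
            if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius ** 2
                   for q in cluster):
                cluster.append(p)
                break
        else:
            clusters.append([p])  # no nearby cluster: start a new one
    return [c for c in clusters if len(c) >= min_size]

# Example: two dense feature groups plus two isolated (noisy) features.
features = [(10, 10), (12, 11), (9, 13),       # landmark A
            (100, 100), (103, 98), (99, 104),  # landmark B
            (200, 5), (50, 300)]               # isolated noise
landmarks = cluster_features(features)
print(len(landmarks))  # → 2 (the two isolated features are discarded)
```

In the paper's setting the clustered items would be SIFT keypoints extracted from omnidirectional images, and the surviving clusters would be treated as single landmarks in the on-line SLAM filter.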

Citation (APA)
Okamoto, J., & Guizilini, V. C. (2010). On-line SLAM using clustered landmarks with omnidirectional vision. Journal of the Brazilian Society of Mechanical Sciences and Engineering, 32(5 SPEC. ISSUE), 468–476. https://doi.org/10.1590/s1678-58782010000500006
