Vision-based semantic-map building and localization

Abstract

A semantic-map building method is proposed to localize a robot in the semantic map. Our semantic map is organized using a SIFT feature-based object representation. In addition to the semantic map, a vision-based relative localization is employed as the process model of an extended Kalman filter, where optical flow and Levenberg-Marquardt least-squares minimization are incorporated to predict relative robot locations. Thus, robust SLAM performance can be obtained even under poor conditions in which localization cannot be achieved by classical odometry-based SLAM. © Springer-Verlag Berlin Heidelberg 2006.
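The abstract describes a pipeline in which inter-frame optical flow, refined by Levenberg-Marquardt minimization, supplies the relative-motion input to the EKF prediction step. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes a planar robot state (x, y, theta), uses OpenCV's Lucas-Kanade tracker for the flow, fits a 2-D rigid transform with SciPy's Levenberg-Marquardt solver, and feeds the result into a standard EKF predict step. Function names, parameter values, and the assumption that the tracked points have already been mapped to ground-plane coordinates are illustrative and are not taken from the paper.

import numpy as np
import cv2
from scipy.optimize import least_squares

def track_features(prev_gray, gray):
    # Detect Shi-Tomasi corners in the previous frame and track them into the
    # current frame with pyramidal Lucas-Kanade optical flow.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    ok = status.ravel() == 1
    return p0.reshape(-1, 2)[ok], p1.reshape(-1, 2)[ok]

def relative_motion(pts_prev, pts_curr):
    # Fit a planar rigid transform (dx, dy, dtheta) to the tracked point pairs
    # by Levenberg-Marquardt least squares on the point residuals.
    def residuals(params):
        dx, dy, dth = params
        c, s = np.cos(dth), np.sin(dth)
        rotated = pts_prev @ np.array([[c, -s], [s, c]]).T
        return (rotated + np.array([dx, dy]) - pts_curr).ravel()
    return least_squares(residuals, x0=np.zeros(3), method='lm').x

def ekf_predict(x, P, u, Q):
    # EKF prediction step driven by the vision-based relative motion
    # u = (dx, dy, dtheta), expressed in the robot frame; F is the Jacobian
    # of the motion model with respect to the state.
    dx, dy, dth = u
    c, s = np.cos(x[2]), np.sin(x[2])
    x_pred = x + np.array([c * dx - s * dy, s * dx + c * dy, dth])
    F = np.array([[1.0, 0.0, -s * dx - c * dy],
                  [0.0, 1.0,  c * dx - s * dy],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

The corresponding EKF measurement update against the SIFT-based semantic landmarks would complete the filter, but its exact form depends on the paper's object representation and is omitted here.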

Citation (APA)

Jeong, S., Lim, J., Suh, I. H., & Choi, B. U. (2006). Vision-based semantic-map building and localization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4251 LNAI-I, pp. 559–568). Springer Verlag. https://doi.org/10.1007/11892960_68
