Robotic systems that can create and use visual maps in real time have obvious advantages in many applications, from autonomous driving to mobile manipulation in the home. In this paper we describe a mapping system based on retaining stereo views of the environment that are collected as the robot moves. Connections among the views are formed by consistent geometric matching of their features. Out-of-sequence matching is the key problem: how to find connections from the current view to other corresponding views in the map. Our approach uses a vocabulary tree to propose candidate views, and a strong geometric filter to eliminate false positives; essentially, the robot continually re-recognizes where it is. We present experiments showing the utility of the approach on video data, including map building in large indoor and outdoor environments, map building without localization, and re-localization when lost.
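To make the view-matching pipeline concrete, the sketch below is a minimal illustration (not the authors' implementation) of how a vocabulary tree can propose candidate views and a RANSAC geometric check can reject false positives before an out-of-sequence connection is added to the map. The VocabularyTree, View, and view_map objects are hypothetical placeholders, and the OpenCV calls assume binary keypoint descriptors.

```python
import numpy as np
import cv2

# Hypothetical sketch of the re-recognition loop described in the abstract:
# a vocabulary tree proposes candidate views for the current view, and a
# geometric consistency check (RANSAC over feature matches) filters out
# false positives before a map connection is added. VocabularyTree, View,
# and view_map are illustrative names, not the authors' actual API.

def propose_candidates(vocab_tree, current_view, k=10):
    """Query the vocabulary tree for the k stored views most similar to the current one."""
    return vocab_tree.query(current_view.descriptors, k)

def geometric_filter(current_view, candidate_view, min_inliers=30):
    """Accept a candidate only if enough feature matches are geometrically
    consistent under a RANSAC-estimated fundamental matrix."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(current_view.descriptors, candidate_view.descriptors)
    if len(matches) < min_inliers:
        return False
    pts_cur = np.float32([current_view.keypoints[m.queryIdx].pt for m in matches])
    pts_cand = np.float32([candidate_view.keypoints[m.trainIdx].pt for m in matches])
    _, inlier_mask = cv2.findFundamentalMat(pts_cur, pts_cand, cv2.FM_RANSAC, 3.0, 0.99)
    return inlier_mask is not None and int(inlier_mask.sum()) >= min_inliers

def add_view_connections(view_map, vocab_tree, current_view):
    """Connect the current view to any previously stored views it re-recognizes,
    then add it to the map and the vocabulary tree."""
    for candidate in propose_candidates(vocab_tree, current_view):
        if geometric_filter(current_view, candidate):
            view_map.add_edge(current_view, candidate)  # out-of-sequence link
    vocab_tree.insert(current_view)
    view_map.add_view(current_view)
```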
Konolige, K., Bowman, J., Chen, J. D., Mihelich, P., Calonder, M., Lepetit, V., & Fua, P. (2010). View-based maps. In Robotics: Science and Systems (Vol. 5, pp. 153–160). MIT Press Journals. https://doi.org/10.7551/mitpress/8727.003.0021