Visual autonomy via 2D matching in rendered 3D models

Abstract

As they decrease in price and increase in fidelity, visually-textured 3D models offer a foundation for robotic spatial reasoning that can support a wide variety of platforms and tasks. This work investigates the capabilities, strengths, and drawbacks of a new sensor, the Matterport 3D camera, in the context of several robot applications. Using hierarchical 2D matching against a database of images rendered from a visually-textured 3D model, it demonstrates that, when similar cameras are used, 2D matching into visually-textured 3D maps yields excellent performance on both global-localization and local-servoing tasks. When the 2D matching spans very different camera transforms, however, performance drops significantly. To handle this situation, we propose and prototype a map-alignment phase in which several visual representations of the same spatial environment overlap: one supports the image matching needed for visual localization, while the other carries the global coordinate system needed for task accomplishment, e.g., point-to-point positioning.
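The matching step described above can be approximated with off-the-shelf local features: render views from the textured model, describe them, and match a live camera frame against that rendered database. The following is a minimal sketch using OpenCV's ORB features; the rendered_views structure, the localize function, and the inlier-count scoring are illustrative assumptions, not the authors' implementation, which uses its own hierarchical matching pipeline.

    # Minimal sketch: match a query image against a database of views
    # rendered from a visually-textured 3D model. Assumes OpenCV (cv2);
    # each database entry carries the camera pose it was rendered from
    # (a hypothetical structure, not taken from the paper).
    import cv2

    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def describe(image):
        """Extract ORB keypoints and descriptors from an image."""
        keypoints, descriptors = orb.detectAndCompute(image, None)
        return keypoints, descriptors

    def localize(query_image, rendered_views):
        """Return the pose of the best-matching rendered view.

        rendered_views: iterable of (image, pose) pairs, where pose is
        the camera transform the view was rendered from (assumed format).
        """
        _, query_desc = describe(query_image)
        best_pose, best_score = None, 0
        for view_image, pose in rendered_views:
            _, view_desc = describe(view_image)
            if query_desc is None or view_desc is None:
                continue
            matches = matcher.match(query_desc, view_desc)
            # Score by the count of strong matches; a real system would
            # also verify geometry (e.g., RANSAC on a fundamental matrix).
            score = sum(1 for m in matches if m.distance < 40)
            if score > best_score:
                best_pose, best_score = pose, score
        return best_pose, best_score

The drop in performance across very different cameras is plausible here: ORB-style descriptors are only moderately invariant to changes in viewpoint, lens, and exposure, so matches between a live frame and a rendered view degrade as the two imaging pipelines diverge.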
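The proposed map-alignment phase requires registering one representation's coordinate frame to another. A standard tool for this, given point correspondences between the two maps, is least-squares rigid alignment (the Kabsch/Umeyama method); the sketch below names that technique explicitly and is a common choice for such a step, not the paper's specific procedure.

    # Least-squares rigid alignment (Kabsch method): find rotation R and
    # translation t mapping source_points onto target_points, e.g.
    # landmarks identified in both map representations (assumed inputs).
    import numpy as np

    def rigid_align(source_points, target_points):
        """source_points, target_points: (N, 3) arrays of corresponding
        3D points. Returns (R, t) with target ~= R @ source + t."""
        src_centroid = source_points.mean(axis=0)
        tgt_centroid = target_points.mean(axis=0)
        src = source_points - src_centroid
        tgt = target_points - tgt_centroid
        # The SVD of the cross-covariance gives the optimal rotation.
        H = src.T @ tgt
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # correct an improper (reflected) solution
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = tgt_centroid - R @ src_centroid
        return R, t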

Citation (APA)

Tenorio, D., Rivera, V., Medina, J., Leondar, A., Gaumer, M., & Dodds, Z. (2015). Visual autonomy via 2D matching in rendered 3D models. In Lecture Notes in Computer Science (Vol. 9474, pp. 373–385). Springer. https://doi.org/10.1007/978-3-319-27857-5_34
