Towards Camera Based Navigation in 3D Maps by Synthesizing Depth Images

Abstract

This paper presents a novel approach to localizing a robot equipped with an omnidirectional camera within a given 3D map. The pose estimate builds upon the synthesis of panoramic depth images, which are compared to the current view of the camera. We present an algorithmic approach to compute the similarity between these synthetic depth images and visual camera images, and show how to utilize this image matching for mobile robot navigation tasks, i.e., heading estimation, global localization, and navigation towards a target position. The presented method requires neither additional colour nor laser intensity information in the map. We provide a first evaluation of the involved image processing pipeline and a set of proof-of-concept experiments on a mobile robot. The presented approach supports different use cases, such as map sharing among heterogeneous robot teams or the use of external 3D map sources such as extruded floor plans.
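The abstract outlines the pipeline without implementation detail, so the following is a minimal sketch of two of its core steps: synthesizing a panoramic depth image from a 3D point-cloud map at a candidate pose, and matching it against the camera's panoramic view to estimate heading. Everything here is an illustrative assumption rather than the authors' method: the spherical projection model, the assumed ±45° vertical field of view, the edge-based similarity (depth discontinuities and intensity edges often coincide, which is one plausible way to compare depth renderings to visual images), and all function names.

```python
import numpy as np

def synthesize_panoramic_depth(points, pose_xyz, yaw, height=64, width=256,
                               max_range=20.0):
    """Render a panoramic depth image from a 3D point-cloud map.

    points: (N, 3) map points in world coordinates.
    pose_xyz: (3,) candidate camera position in the world frame.
    yaw: candidate heading in radians.
    Pixels hit by no map point keep the value max_range.
    """
    rel = points - np.asarray(pose_xyz)          # map points relative to the camera
    c, s = np.cos(-yaw), np.sin(-yaw)            # rotate into the camera frame (yaw only)
    x = c * rel[:, 0] - s * rel[:, 1]
    y = s * rel[:, 0] + c * rel[:, 1]
    z = rel[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.arctan2(y, x)                   # [-pi, pi)
    elevation = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    # Spherical projection onto the panorama grid.
    u = ((azimuth + np.pi) / (2 * np.pi) * width).astype(int) % width
    v_fov = np.pi / 4                            # assumed +/-45 deg vertical FOV
    v = ((v_fov - elevation) / (2 * v_fov) * height).astype(int)
    depth = np.full((height, width), max_range)
    valid = (v >= 0) & (v < height) & (r < max_range)
    # Keep the nearest point per pixel (simple z-buffer; slow but clear).
    for vi, ui, ri in zip(v[valid], u[valid], r[valid]):
        if ri < depth[vi, ui]:
            depth[vi, ui] = ri
    return depth

def gradient_magnitude(img):
    """Edge response; depth and intensity panoramas share much edge structure."""
    gx = np.diff(img, axis=1, append=img[:, :1])   # circular horizontal gradient
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.hypot(gx, gy)

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def estimate_heading(camera_pano, synth_pano):
    """Estimate heading by circularly shifting the synthetic panorama's columns
    and keeping the shift that maximizes edge-map similarity."""
    cam_edges = gradient_magnitude(camera_pano)
    syn_edges = gradient_magnitude(synth_pano)
    width = synth_pano.shape[1]
    scores = [ncc(cam_edges, np.roll(syn_edges, shift, axis=1))
              for shift in range(width)]
    best = int(np.argmax(scores))
    return 2 * np.pi * best / width, scores[best]
```

Global localization and navigation towards a target fit the same pattern: evaluate the similarity score over a set of candidate positions in the map and follow the maximum. Again, this is a sketch of the general idea under the stated assumptions, not the evaluated pipeline from the paper.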

Citation (APA)

Schubert, S., Neubert, P., & Protzel, P. (2017). Towards camera based navigation in 3D maps by synthesizing depth images. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10454 LNAI, pp. 601–616). Springer Verlag. https://doi.org/10.1007/978-3-319-64107-2_49
