Visual self-localization with tiny images

Abstract

Self-localization of mobile robots is often performed visually, where the image resolution strongly influences the computation time. In this paper, we examine how reducing the image resolution affects localization accuracy. We downscale the images, preserving their aspect ratio, down to tiny resolutions of 15×11 and 20×15 pixels. Our results are based on extensive tests on different datasets recorded indoors by a small differential-drive robot and outdoors by a flying quadrocopter. Four well-known global image features and a pixel-wise image comparison method are compared under realistic conditions such as illumination changes and translations. Our results show that accurate localization is achievable even at these tiny resolutions. In this way, the localization process can be sped up considerably.
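To make the approach concrete, the sketch below illustrates the pixel-wise comparison variant mentioned in the abstract: camera frames are downscaled to 20×15 pixels and a query frame is matched against a precomputed database of tiny reference images taken at known positions. The OpenCV/NumPy dependencies, function names, and database layout are illustrative assumptions, not the authors' implementation, and the four global image features evaluated in the paper are not shown here.

```python
import cv2
import numpy as np

# One of the tiny resolutions examined in the paper (width x height).
TINY_SIZE = (20, 15)

def to_tiny(image_bgr, size=TINY_SIZE):
    """Downscale a camera frame to a tiny grayscale image with values in [0, 1]."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # INTER_AREA averages source pixels, which suits strong downscaling.
    tiny = cv2.resize(gray, size, interpolation=cv2.INTER_AREA)
    return tiny.astype(np.float32) / 255.0

def pixelwise_distance(a, b):
    """Sum of absolute per-pixel differences between two tiny images."""
    return float(np.abs(a - b).sum())

def localize(query_bgr, reference_db):
    """Return the label of the best-matching reference image.

    reference_db: list of (label, tiny_image) pairs built beforehand from
    images captured at known positions (a hypothetical database format).
    """
    query = to_tiny(query_bgr)
    best_label, best_score = None, float("inf")
    for label, tiny in reference_db:
        score = pixelwise_distance(query, tiny)
        if score < best_score:
            best_label, best_score = label, score
    return best_label, best_score
```

In such a setup, the reference database is built once by applying to_tiny to images taken at known poses; at runtime each new frame is compared against all entries, which stays cheap because every comparison touches only 300 pixels.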

Cite

CITATION STYLE

APA

Hofmeister, M., Erhard, S., & Zell, A. (2009). Visual self-localization with tiny images. In Informatik aktuell (pp. 177–184). Springer. https://doi.org/10.1007/978-3-642-10284-4_23
