Applying high-level understanding to visual localisation for mapping


Abstract

Digital cameras are now commonly mounted on robots. A common limitation of these cameras is a relatively small field of view, so the camera is usually tilted downwards to see the floor immediately in front of the robot for obstacle avoidance. With the camera tilted, vertical edges no longer appear vertical in the image. This effect can, however, be used to advantage to discriminate amongst straight-line edges extracted from the image when searching for landmarks. It can also be used to estimate angles of rotation and distances moved between successive images in order to assist with localisation. Due to perspective, horizontal edges in the real world very rarely appear horizontal in the image. By mapping these edges back to real-world coordinates, the locations of the edges in two successive images can be used to measure rotations or translations of the robot. © 2007 Springer-Verlag Berlin Heidelberg.
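The geometric effect the abstract describes can be illustrated with a simple pinhole-camera sketch (this is an illustration of the general principle, not code from the paper; the focal length, tilt angle, and edge position below are arbitrary assumptions). A vertical edge off to one side of the optical axis projects to a column of constant image u-coordinate when the camera is level, but to a slanted line once the camera is pitched downwards:

```python
import numpy as np

def rot_x(theta):
    """Rotation about the camera x-axis (pitch/tilt)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0,  0],
                     [0, c, -s],
                     [0, s,  c]])

def project(points, tilt, f=500.0):
    """Pinhole projection (focal length f in pixels) of world
    points after rotating them into the tilted camera frame."""
    cam = (rot_x(tilt) @ points.T).T
    return f * cam[:, :2] / cam[:, 2:3]   # (u, v) per point

# A vertical edge 1 m to the side, 3 m ahead, spanning 0-1 m in height
edge = np.array([[1.0, y, 3.0] for y in np.linspace(0.0, 1.0, 5)])

upright = project(edge, tilt=0.0)
tilted = project(edge, tilt=np.radians(30))

# Level camera: u is constant along the edge, so it images as vertical.
# Tilted camera: the depth term now depends on height, so u varies
# along the edge and the imaged line is slanted.
print("u-spread, level camera:", np.ptp(upright[:, 0]))
print("u-spread, tilted camera:", np.ptp(tilted[:, 0]))
```

Because the amount of slant depends on the edge's position relative to the camera, it carries information that can help distinguish true vertical landmarks from other straight edges, which is the discrimination cue the abstract refers to.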

Citation (APA)

Taylor, T. (2007). Applying high-level understanding to visual localisation for mapping. Studies in Computational Intelligence, 76, 35–42. https://doi.org/10.1007/978-3-540-73424-6_5
