Visual planning for autonomous mobile robot navigation

Abstract

For an autonomous mobile robot following a planned path, self-localization is an essential task: cumulative errors from its noisy sensors make absolute localization necessary. Absolute robot localization is commonly performed by measuring the relative distance from the robot to previously learned landmarks in the environment. Landmarks can be interest points, colored objects, or rectangular regions such as posters or emergency signs, which serve as useful, non-intrusive beacons in human environments. This paper presents an active localization method: a visual planning function selects, from a collision-free path and a set of planar landmarks, a subset of visible landmarks and the best combination of camera parameters (pan, tilt, and zoom) for positions sampled along the path. A visibility measure and several utility measures are defined in order to select, for each position, the camera modality and the subset of landmarks that maximize these local criteria. Finally, a dynamic programming method is proposed to minimize saccadic camera movements over the whole trajectory. © Springer-Verlag Berlin Heidelberg 2005.
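The trajectory-level step described above (scoring candidate camera configurations at each sampled position, then using dynamic programming to avoid large saccadic re-orientations between consecutive positions) can be illustrated with a short Python sketch. This is not the authors' implementation: the configuration fields, the utility scores, the saccade cost, and the movement weight are hypothetical stand-ins for the visibility and utility criteria defined in the paper.

# Viterbi-style dynamic program: pick one (pan, tilt, zoom) configuration per
# sampled path position so that total utility stays high while camera movement
# (saccade) cost between consecutive positions stays low. Illustrative sketch only.

from dataclasses import dataclass
from typing import List


@dataclass
class CameraConfig:
    pan: float      # degrees
    tilt: float     # degrees
    zoom: float     # focal-length factor
    utility: float  # combined visibility/utility score at this path position


def saccade_cost(a: CameraConfig, b: CameraConfig) -> float:
    """Penalize large camera re-orientations between consecutive positions (assumed metric)."""
    return abs(a.pan - b.pan) + abs(a.tilt - b.tilt) + 0.5 * abs(a.zoom - b.zoom)


def plan_camera_sequence(candidates: List[List[CameraConfig]],
                         movement_weight: float = 0.1) -> List[CameraConfig]:
    """Choose one configuration per position, maximizing utility minus movement cost."""
    n = len(candidates)
    # best[i][j]: best accumulated score ending at configuration j of position i
    best = [[c.utility for c in candidates[0]]]
    back = []  # back-pointers for retrieving the optimal sequence
    for i in range(1, n):
        scores, pointers = [], []
        for c in candidates[i]:
            # evaluate the transition from every configuration of the previous position
            options = [best[i - 1][k] - movement_weight * saccade_cost(prev, c)
                       for k, prev in enumerate(candidates[i - 1])]
            k_best = max(range(len(options)), key=options.__getitem__)
            scores.append(options[k_best] + c.utility)
            pointers.append(k_best)
        best.append(scores)
        back.append(pointers)
    # backtrack from the best final configuration
    j = max(range(len(best[-1])), key=best[-1].__getitem__)
    sequence = [candidates[-1][j]]
    for i in range(n - 2, -1, -1):
        j = back[i][j]
        sequence.append(candidates[i][j])
    return list(reversed(sequence))

In the paper, the per-configuration utilities would come from the visibility and utility measures evaluated at each sampled position; here they are simply assumed to be given as inputs.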

Citation (APA)

Marin-Hernandez, A., Devy, M., & Ayala-Ramirez, V. (2005). Visual planning for autonomous mobile robot navigation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3789 LNAI, pp. 1001–1011). https://doi.org/10.1007/11579427_102
