Where did I take that snapshot? Scene-based homing by image matching

Abstract

In homing tasks, the goal is often not marked by visible objects but must be inferred from its spatial relation to the visual cues in the surrounding scene. Computing the goal direction exactly would require knowledge of the distances to visible landmarks, information that is not directly available to passive vision systems. However, if prior assumptions about typical distance distributions are used, a snapshot taken at the goal suffices to compute the goal direction from the current view. We show that most existing approaches to scene-based homing implicitly assume an isotropic landmark distribution. As an alternative, we propose a homing scheme that uses parameterized displacement fields, obtained from an approximation that incorporates prior knowledge about perspective distortions of the visual environment. A mathematical analysis proves that neither approximation prevents the schemes from approaching the goal with arbitrary accuracy, but that the two lead to different errors in the computed goal direction. Mobile robot experiments test the theoretical predictions and demonstrate the practical feasibility of the new approach.
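
As a concrete illustration of the equal-distance (isotropic) assumption, the following Python sketch warps a one-dimensional panoramic snapshot under hypothetical movement parameters and selects the parameters whose prediction best matches the current view; the home direction then follows from the best-matching movement. This is a minimal sketch under assumed conventions (one-dimensional grey-value panoramas, a brute-force grid search, and the illustrative names warp_snapshot and home_direction), not the authors' exact formulation.

import numpy as np

def warp_snapshot(snapshot, alpha, rho, psi):
    # Predict the current panoramic view from the goal snapshot under the
    # equal-distance assumption: every landmark lies at unit distance from
    # the goal. The agent is assumed to have moved a normalized distance
    # rho < 1 in direction alpha and turned by psi (snapshot coordinates).
    n = snapshot.size
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)  # retinal angles
    u = np.stack([np.cos(phi + psi), np.sin(phi + psi)])    # viewing rays (world frame)
    c = rho * np.array([np.cos(alpha), np.sin(alpha)])      # agent position
    b = c @ u
    r = -b + np.sqrt(b * b + 1.0 - rho * rho)  # ray length to the unit circle
    theta = np.arctan2(c[1] + r * u[1], c[0] + r * u[0]) % (2.0 * np.pi)
    idx = np.round(theta / (2.0 * np.pi) * n).astype(int) % n
    return snapshot[idx]  # nearest-neighbour resampling of the snapshot

def home_direction(snapshot, view, n_alpha=36, n_rho=8, n_psi=36):
    # Brute-force search: the warp that best matches the current view
    # yields the home direction (opposite to the inferred movement,
    # expressed in the agent's current frame).
    best = (np.inf, 0.0, 0.0)
    for alpha in np.linspace(0.0, 2.0 * np.pi, n_alpha, endpoint=False):
        for rho in np.linspace(0.1, 0.9, n_rho):
            for psi in np.linspace(0.0, 2.0 * np.pi, n_psi, endpoint=False):
                err = np.sum((warp_snapshot(snapshot, alpha, rho, psi) - view) ** 2)
                if err < best[0]:
                    best = (err, alpha, psi)
    return (best[1] + np.pi - best[2]) % (2.0 * np.pi)

# A view generated from a known displacement should be homed approximately:
rng = np.random.default_rng(0)
snap = rng.random(72)
view = warp_snapshot(snap, alpha=1.0, rho=0.4, psi=0.5)
print(home_direction(snap, view))  # roughly (1.0 + pi - 0.5) mod 2*pi

The three-parameter grid search mirrors the idea of testing a family of parameterized displacement fields against the observed view; a distance prior other than the unit circle of landmarks would change only the ray-intersection step.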

Citation

Franz, M. O., Schölkopf, B., Mallot, H. A., & Bülthoff, H. H. (1998). Where did I take that snapshot? Scene-based homing by image matching. Biological Cybernetics, 79(3), 191–202. https://doi.org/10.1007/s004220050470
