Depth estimation during fixational head movements in a humanoid robot

Abstract

Under natural viewing conditions, humans are not aware of continually performing small head and eye movements in the periods between voluntary relocations of gaze. It has recently been shown that these fixational head movements provide useful depth information in the form of parallax. Here, we replicate these coordinated head and eye movements in a humanoid robot and describe a method for extracting the resulting depth information. Proprioceptive signals are interpreted by means of a kinematic model of the robot to compute the velocity of the camera. The resulting signal is then optimally integrated with the optic flow to estimate depth in the scene. We present the results of simulations that validate the proposed approach. © 2013 Springer-Verlag.
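
As a rough illustration of the depth-from-parallax step described in the abstract, the following is a minimal Python sketch of how per-point depth could be recovered from measured optic flow once the camera's translational and rotational velocity are known (e.g., from the robot's kinematic model and proprioception). The function name, the pinhole-camera motion-field model, and the simple per-point least-squares combination are assumptions for illustration; they do not reproduce the paper's optimal integration scheme.

```python
import numpy as np

def depth_from_parallax(flow, pts, T, omega, f):
    """Estimate per-point depth from optic flow and known camera motion.

    flow  : (N, 2) measured optic flow (u, v), pixels/s
    pts   : (N, 2) image coordinates (x, y) relative to the principal point, pixels
    T     : (3,) camera translational velocity (e.g., from the kinematic model)
    omega : (3,) camera rotational velocity, rad/s
    f     : focal length in pixels
    """
    x, y = pts[:, 0], pts[:, 1]

    # Rotational component of the motion field (independent of depth)
    u_rot = (x * y / f) * omega[0] - (f + x**2 / f) * omega[1] + y * omega[2]
    v_rot = (f + y**2 / f) * omega[0] - (x * y / f) * omega[1] - x * omega[2]

    # Translational component scales with inverse depth: u_t = a / Z, v_t = b / Z
    a = -f * T[0] + x * T[2]
    b = -f * T[1] + y * T[2]

    # Depth-dependent residual flow after removing the rotational part
    u_t = flow[:, 0] - u_rot
    v_t = flow[:, 1] - v_rot

    # Least-squares inverse depth per point from the two flow components
    inv_Z = (a * u_t + b * v_t) / (a**2 + b**2 + 1e-12)
    return 1.0 / np.clip(inv_Z, 1e-6, None)
```

In practice the proprioceptive velocity estimate and the flow measurements are both noisy, which is why the paper integrates the two signals optimally rather than solving for depth point by point as in this sketch.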

Citation (APA)

Antonelli, M., Del Pobil, A. P., & Rucci, M. (2013). Depth estimation during fixational head movements in a humanoid robot. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7963 LNCS, pp. 264–273). https://doi.org/10.1007/978-3-642-39402-7_27
