In robotic systems with moving cameras, control of gaze allows for image stabilization, tracking, and attention switching. Proper integration of these capabilities lets the robot exploit the kinematic redundancy of the oculomotor system to improve tracking performance and extend the field of view, while at the same time stabilizing vision to reduce the image blur induced by the robot's own movements. Gaze may be driven not only by vision but also by other sensors (e.g. inertial sensors or motor encoders) that carry information about the robot's own motion. Humanoid robots have sophisticated oculomotor systems, are usually equipped with inertial devices, and are therefore an ideal platform to study this problem. We present a complete architecture for gaze control of a humanoid robot. Our system controls the neck and the eyes in order to track a 3D Cartesian fixation point in space. The redundancy of the kinematic problem is exploited to implement additional behaviors, namely passive gaze stabilization, saccadic movements, and the vestibulo-ocular reflex. We implement this framework on the iCub head, which is equipped with a 3-DoF neck and a 3-DoF eye system and includes an inertial unit that provides feedback on the acceleration and angular velocity of the head. The framework presented in this work can be applied to any robot equipped with an anthropomorphic head. In addition, we provide an open-source, modular implementation, which has already been ported to other robotic platforms.
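The abstract mentions an open-source, modular implementation for the iCub. As a minimal sketch of how a client program might command such a controller, the C++ snippet below assumes the controller is exposed through YARP's gaze-controller client interface (IGazeControl), as in the iCub software stack; the remote name "/iKinGazeCtrl", the local port prefix, the header name, and the fixation-point coordinates are illustrative assumptions, not details taken from the paper.

```cpp
// Sketch: sending a 3D Cartesian fixation point to a gaze controller via
// YARP's IGazeControl client interface. Names and coordinates are illustrative.
#include <yarp/os/Network.h>
#include <yarp/os/Property.h>
#include <yarp/dev/PolyDriver.h>
#include <yarp/dev/GazeControl.h>
#include <yarp/sig/Vector.h>

int main()
{
    yarp::os::Network yarp;                        // initialize YARP networking

    yarp::os::Property option;
    option.put("device", "gazecontrollerclient");  // client side of the controller
    option.put("remote", "/iKinGazeCtrl");         // server port prefix (assumed)
    option.put("local", "/gazeClient");            // local port prefix (assumed)

    yarp::dev::PolyDriver driver(option);
    if (!driver.isValid())
        return 1;

    yarp::dev::IGazeControl *igaze = nullptr;
    driver.view(igaze);

    // Shape neck vs. eye dynamics: a slower neck and faster eyes let the
    // kinematic redundancy be resolved with quick eye motion and smooth neck motion.
    igaze->setNeckTrajTime(0.8);
    igaze->setEyesTrajTime(0.3);

    // Track a 3D Cartesian fixation point expressed in the robot root frame [m].
    yarp::sig::Vector fp(3);
    fp[0] = -0.5;  // x: in front of the robot
    fp[1] =  0.0;  // y: centered
    fp[2] =  0.3;  // z: above the root frame origin
    igaze->lookAtFixationPoint(fp);
    igaze->waitMotionDone();

    driver.close();
    return 0;
}
```

In this setup the client only specifies the fixation point; the controller itself distributes the motion over the six neck and eye degrees of freedom and layers the stabilization, saccade, and vestibulo-ocular behaviors described in the abstract.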
Roncone, A., Pattacini, U., Metta, G., & Natale, L. (2016). A cartesian 6-DoF gaze controller for humanoid robots. In Robotics: Science and Systems (Vol. 12). MIT Press Journals. https://doi.org/10.15607/rss.2016.xii.022