Learning the optimal control of coordinated eye and head movements


Abstract

Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models to underlie the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many aspects, including the relationship between amplitude, duration, and peak velocity in head-restrained conditions, and the relative contribution of eye and head to the total gaze shift in head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements. © 2011 Saeb et al.
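To make the idea of a local, cost-minimizing adaptation rule concrete, the sketch below shows a toy version of the general scheme described in the abstract. It is not the authors' model: the plant (eye and head displacements simply add up to the gaze shift), the cost function (squared gaze error plus quadratic motor effort), the target range, and all parameter values are illustrative assumptions. It only demonstrates how an open-loop mapping from desired gaze amplitude to eye and head commands can be adapted incrementally, trial by trial, with a local gradient-style update.

```python
import numpy as np

# Illustrative sketch (not the model from the paper): an open-loop controller
# maps a desired gaze shift to eye and head commands through a weight vector w,
# and a local gradient-style update adapts w to reduce an assumed cost of
# squared gaze error plus quadratic motor effort.

rng = np.random.default_rng(0)
EFFORT_WEIGHT = 0.1      # assumed trade-off between accuracy and motor effort
LEARNING_RATE = 1e-5     # kept small because targets are expressed in degrees

w = np.array([0.5, 0.5])  # initial eye and head contribution per degree of gaze

for trial in range(5000):
    target = rng.uniform(5.0, 60.0)   # desired gaze amplitude (deg), assumed range
    command = w * target              # open-loop eye and head commands
    gaze = command.sum()              # toy plant: displacements simply add up
    error = gaze - target
    # Local update: gradient of error**2 + EFFORT_WEIGHT * ||command||^2 w.r.t. w
    grad = 2.0 * error * target + 2.0 * EFFORT_WEIGHT * command * target
    w -= LEARNING_RATE * grad

print("learned eye/head contributions per unit gaze shift:", w)
```

Under this assumed cost, the learned weights settle at a point where the gaze shift slightly undershoots the target because accuracy is traded off against effort; in the paper's framework, the corresponding trade-off is expressed through the proposed cost function rather than this toy one.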

Citation (APA)
Saeb, S., Weber, C., & Triesch, J. (2011). Learning the optimal control of coordinated eye and head movements. PLoS Computational Biology, 7(11). https://doi.org/10.1371/journal.pcbi.1002253
