We have examined the underlying coordinate frame for pursuit learning by testing how broadly learning generalizes to different retinal loci and directions of target motion. Learned changes in pursuit were induced using double steps of target speed. Monkeys tracked a target that stepped obliquely away from the point of fixation and then moved smoothly either leftward or rightward. In each experimental session, we adapted the response to targets moving in one direction across one locus of the visual field by changing target speed during the initial catch-up saccade. Learning occurred in both presaccadic and postsaccadic eye velocity. The changes were specific to the adapted direction and did not generalize to the opposite direction of pursuit. To test the spatial scale of learning, we examined the responses to targets that moved across different parts of the visual field at the same velocity as the learning targets. Learning generalized partially to motion presented at untrained locations in the visual field, even locations across the vertical meridian. Experiments with two sets of learning trials showed interference between learning at different sites in the visual field, suggesting that pursuit learning is not capable of spatial specificity. Our findings are consistent with previous suggestions that pursuit learning is encoded in an intermediate representation that is neither strictly sensory nor strictly motor. Our data add the constraint that the site or sites of pursuit learning must process visual information on a fairly large spatial scale that extends across the horizontal and vertical meridians.
Chou, I. H., & Lisberger, S. G. (2002). Spatial Generalization of Learning in Smooth Pursuit Eye Movements: Implications for the Coordinate Frame and Sites of Learning. Journal of Neuroscience, 22(11), 4728–4739. https://doi.org/10.1523/jneurosci.22-11-04728.2002