Coordinating Multiple Sensory Modalities While Learning to Reach

  • Schlesinger, M.
  • Parisi, D.

Abstract

By the onset of reaching, young infants are already able to coordinate vision of a target with the felt position of their arm [7]. How is this coordination achieved? In order to investigate the hypothesis that infants learn to link vision and proprioception via the sense of touch, we implemented a recent computational model of reaching [22]. The model employs a genetic algorithm as a proxy for sensorimotor development in young infants. The three principal findings of our simulations were that tactile perception: (1) facilitates learning to coordinate vision and proprioception, (2) promotes an efficient reaching strategy, and (3) accelerates the remapping of vision and proprioception after perturbation of the multimodal map. Follow-up analyses of the model provide additional support for our hypothesis, and suggest that touch helps to coordinate vision and proprioception by providing a third, correlated information channel.
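The abstract gives no implementation detail, so the following sketch only illustrates the general approach: a genetic algorithm evolving a controller that combines vision, proprioception, and touch to drive a simulated arm toward a target. The two-link arm, the linear controller, the fitness measure, and every parameter value below are assumptions made for illustration, not the model described in [22].

```python
# A minimal sketch, not the authors' implementation: a genetic algorithm
# evolves the weights of a linear controller that maps three sensory
# channels (vision of the target, proprioception of the joint angles, and
# a binary touch signal on contact) to joint velocities of a two-link
# planar arm. The arm model, controller form, fitness function, and all
# parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

L1, L2 = 0.3, 0.25        # assumed link lengths (m)
N_JOINTS = 2
N_SENSORS = 2 + 2 + 1     # target (x, y) + joint angles + touch
STEPS = 30                # control steps per reaching trial

def hand_position(theta):
    """Forward kinematics: hand position for joint angles theta."""
    x = L1 * np.cos(theta[0]) + L2 * np.cos(theta[0] + theta[1])
    y = L1 * np.sin(theta[0]) + L2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

def reach_error(weights, target):
    """Run one reaching trial; return the final hand-to-target distance."""
    W = weights.reshape(N_JOINTS, N_SENSORS)
    theta = np.zeros(N_JOINTS)
    for _ in range(STEPS):
        hand = hand_position(theta)
        touch = 1.0 if np.linalg.norm(hand - target) < 0.02 else 0.0
        senses = np.concatenate([target, theta, [touch]])
        theta = theta + 0.1 * np.tanh(W @ senses)   # joint-velocity command
    return np.linalg.norm(hand_position(theta) - target)

def mean_error(weights, targets):
    return np.mean([reach_error(weights, t) for t in targets])

def evolve(pop_size=40, generations=100):
    """Truncation-selection GA over flattened controller weights."""
    pop = rng.normal(0.0, 0.5, size=(pop_size, N_JOINTS * N_SENSORS))
    targets = rng.uniform(-0.4, 0.4, size=(5, 2))   # fixed evaluation targets
    for _ in range(generations):
        fitness = np.array([mean_error(w, targets) for w in pop])
        parents = pop[np.argsort(fitness)[:pop_size // 2]]
        children = parents + rng.normal(0.0, 0.05, size=parents.shape)  # mutation
        pop = np.vstack([parents, children])
    fitness = np.array([mean_error(w, targets) for w in pop])
    return pop[np.argmin(fitness)]

if __name__ == "__main__":
    best = evolve()
    test_targets = rng.uniform(-0.4, 0.4, size=(5, 2))
    print("mean reach error on held-out targets:", mean_error(best, test_targets))
```

In this toy setup the touch channel switches on only when the hand is near the target, which mirrors the abstract's hypothesis that contact supplies a third information channel correlated with both vision and proprioception.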

Citation (APA)

Schlesinger, M., & Parisi, D. (2001). Coordinating Multiple Sensory Modalities While Learning to Reach (pp. 113–122). https://doi.org/10.1007/978-1-4471-0281-6_12
