Abstract
The goal of this paper is to widen the lens on language to include the manual modality. We look first at hearing children who are acquiring language from a spoken language model and find that even before they use speech to communicate, they use gesture. Moreover, those gestures precede, and predict, the acquisition of structures in speech. We look next at deaf children whose hearing losses prevent them from using the oral modality, and whose hearing parents have not presented them with a language model in the manual modality. These children fall back on the manual modality to communicate and use gestures, which take on many of the forms and functions of natural language. These homemade gesture systems constitute the first step in the emergence of manual sign systems that are shared within deaf communities and are full-fledged languages. We end by widening the lens on sign language to include gesture and find that signers not only gesture, but they also use gesture in learning contexts just as speakers do. These findings suggest that what is key in gesture's ability to predict learning is its ability to add a second representational format to communication, rather than a second modality. Gesture can thus be language, assuming linguistic forms and functions, when other vehicles are not available; but when speech or sign is possible, gesture works along with language, providing an additional representational format that can promote learning. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Goldin-Meadow, S. (2014). Widening the lens: What the manual modality reveals about language, learning and cognition. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1651). https://doi.org/10.1098/rstb.2013.0295