Bimodal speech perception in early infancy

  • Kuhl P
  • Meltzoff A
Citations: N/A
Readers: 14 (Mendeley users who have this article in their library)

This article is free to access.

Abstract

We previously reported preliminary data on infants' abilities to detect the cross-modal correspondences between the visual and auditory concomitants of speech [Kuhl and Meltzoff, J. Acoust. Soc. Am. Suppl. 1 70, S96 (1981)]. We have now completed testing on 32 infants aged 4.5 to 5 months. Each was shown a filmed display of two faces, one producing the articulatory movements corresponding to the vowel [a] and the other producing the articulatory movements corresponding to the vowel [i]. One of the sound tracks (either [a] or [i]) was played in synchrony with the faces. The infants' visual fixations to the faces were scored by an observer who could neither hear the sound track presented to the infant nor see the faces. Results demonstrated that the infants looked longer at the face matching the sound being presented than at the nonmatching face (p < 0.01). We hypothesized that the recognition of these cross-modal equivalences was based on the structural correspondence between a particular articulatory movement and a particular vowel sound, rather than on any temporal correspondence between a particular face-sound pair. This hypothesis predicts that if the crucial spectral information is removed from the vowels while the temporal information is preserved, performance should drop to chance. Experiment II tested this hypothesis using pure-tone stimuli. Performance fell to chance. [Supported by NSF.]

Citation (APA)

Kuhl, P. K., & Meltzoff, A. N. (1982). Bimodal speech perception in early infancy. The Journal of the Acoustical Society of America, 71(S1), S77–S78. https://doi.org/10.1121/1.2019555
