Crossmodal integration of emotional information from face and voice in the infant brain

Abstract

We examined 7-month-old infants' processing of emotionally congruent and incongruent face-voice pairs using ERP measures. Infants watched facial expressions (happy or angry) and, after a delay of 400 ms, heard a word spoken with a prosody that was either emotionally congruent or incongruent with the face being presented. The ERP data revealed that the amplitude of a negative component and a subsequent positive component in infants' ERPs varied as a function of crossmodal emotional congruity. An emotionally incongruent prosody elicited a larger negative component in infants' ERPs than did an emotionally congruent prosody. Conversely, the amplitude of infants' positive component was larger to emotionally congruent than to incongruent prosody. Previous work has shown that an attenuation of the negative component and an enhancement of the later positive component in infants' ERPs reflect the recognition of an item. Thus, the current findings suggest that 7-month-olds integrate emotional information across modalities and recognize common affect in the face and voice. © 2006 Blackwell Publishing Ltd.

Citation (APA)

Grossmann, T., Striano, T., & Friederici, A. D. (2006). Crossmodal integration of emotional information from face and voice in the infant brain. Developmental Science, 9(3), 309–315. https://doi.org/10.1111/j.1467-7687.2006.00494.x
