Abstract
We investigated the brain responses associated with integrating a speaker's facial emotion into situations in which the speaker verbally describes an emotional event. In two EEG experiments, young adult participants were primed with a happy or sad speaker face. The target consisted of an emotionally positive or negative IAPS photo accompanied by a spoken emotional sentence describing that photo. The speaker's face either matched or mismatched the valence of the event-sentence. ERPs elicited by the adverb conveying sentence valence showed significantly larger negative mean amplitudes in the EPN time window, and descriptively in the N400 time window, when a positive speaker face preceded a negative event-sentence (vs. matching negative prime-target trials). Our results suggest that young adults may allocate more processing resources to attending to and processing negative (vs. positive) emotional situations when primed with a positive (vs. negative) speaker face, but not vice versa. A post-hoc analysis indicated that this interaction was driven by female participants. We extend previous eye-tracking findings with insights into the timing of the functional brain correlates implicated in integrating the valence of a speaker face into a multi-modal emotional situation.
Maquate, K., Kissler, J., & Knoeferle, P. (2023). Speakers’ emotional facial expressions modulate subsequent multi-modal language processing: ERP evidence. Language, Cognition and Neuroscience, 38(10), 1492–1513. https://doi.org/10.1080/23273798.2022.2108089