Reading fluent speech from talking faces: Typical brain networks and individual differences

  • Deborah A. Hall
  • Clayton Fussell
  • A. Quentin Summerfield

Abstract

Listeners are able to extract important linguistic information by viewing the talker's face, a process known as "speechreading." Previous studies of speechreading present small closed sets of simple words and their results indicate that visual speech processing engages a wide network of brain regions in the temporal, frontal, and parietal lobes that are likely to underlie multiple stages of the receptive language system. The present study further explored this network in a large group of subjects by presenting naturally spoken sentences which tap the richer complexities of visual speech processing. Four different baselines (blank screen, static face, nonlinguistic facial gurning, and auditory speech) enabled us to determine the hierarchy of neural processing involved in speechreading and to test the claim that visual input reliably accesses sound-based representations in the auditory cortex. In contrast to passively viewing a blank screen, the static-face condition evoked activation bilaterally across the border of the fusiform gyrus and cerebellum, and in the medial superior frontal gyrus and left precentral gyrus (p < .05, whole brain corrected). With the static face as baseline, the gurning face evoked bilateral activation in the motion-sensitive region of the occipital cortex, whereas visual speech additionally engaged the middle temporal gyrus, inferior and middle frontal gyri, and the inferior parietal lobe, particularly in the left hemisphere. These latter regions are implicated in lexical stages of spoken language processing. Although auditory speech generated extensive bilateral activation across both superior and middle temporal gyri, the group-averaged pattern of speechreading activation failed to include any auditory regions along the superior temporal gyrus, suggesting that fluent visual speech does not always involve sound-based coding of the visual input.
An important finding from the individual subject analyses was that activation in the superior temporal gyrus did reach significance (p < .001, small-volume corrected) for a subset of the group. Moreover, the extent of the left-sided superior temporal gyrus activity was strongly correlated with speechreading performance. Skilled speechreading was also associated with activations and deactivations in other brain regions, suggesting that individual differences reflect the efficiency of a circuit linking sensory, perceptual, memory, cognitive, and linguistic processes rather than the operation of a single component process.

