Bottom-up and top-down computations in word- and face-selective cortex

  • Kay K
  • Yeatman J
Citations: N/A
Readers: 115 (Mendeley users with this article in their library)

Abstract

The ability to read a page of text or recognize a person's face depends on category-selective visual regions in ventral temporal cortex (VTC). To understand how these regions mediate word and face recognition, it is necessary to characterize how stimuli are represented and how this representation is used in the execution of a cognitive task. Here, we show that the response of a category-selective region in VTC can be computed as the degree to which the low-level properties of the stimulus match a category template. Moreover, we show that during execution of a task, the bottom-up representation is scaled by the intraparietal sulcus (IPS), and that the level of IPS engagement reflects the cognitive demands of the task. These results provide an account of neural processing in VTC in the form of a model that addresses both bottom-up and top-down effects and quantitatively predicts VTC responses.

eLife digest

As your eyes scan this page, your visual system performs a series of computations that allow you to derive meaning from the printed words. The visual system solves this task with such apparent ease that you may never have thought about the challenges that your brain must overcome for you to read a page of text. The brain must overcome similar challenges to enable you to recognize the faces of your friends.

Two factors affect how the neurons in the visual system respond to what you are looking at: the physical features of the object and your cognitive state (for example, your knowledge, past experiences, and the cognitive demands of the task at hand). To figure out exactly how these factors influence the responses of the neurons, Kay and Yeatman used functional magnetic resonance imaging to scan the brains of human volunteers as they viewed different images (some of which were of faces or words). The volunteers had to perform various tasks while viewing the images. These tasks included focusing their attention on a small dot, categorizing the image, and stating whether the image had previously been shown.

From the brain imaging data, Kay and Yeatman developed a model of the brain circuits that enable faces and words to be recognized. The model separately characterizes the influence of physical features and the influence of cognitive state, and describes several different types of processing: how the brain represents what is seen, how it makes a decision about how to respond, and how it changes its own activity when carrying out the decision. The model has been made freely available online so that other researchers can reproduce and build upon Kay and Yeatman’s findings.

The current model is not perfect, and it does not describe neural activity in fine detail. However, by obtaining new experimental measurements, the model could be systematically improved.
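The abstract's core computational claim, that a VTC response is a bottom-up template match multiplicatively scaled by top-down IPS engagement, lends itself to a compact sketch. The Python snippet below is a minimal illustration of that idea under stated assumptions, not the authors' published model: the function `vtc_response`, the feature vectors, and the `ips_gain` parameter are hypothetical stand-ins for the quantities described in the abstract.

```python
import numpy as np

def vtc_response(stimulus_features, category_template, ips_gain=1.0):
    """Conceptual sketch of the two-stage account (hypothetical names/shapes).

    Bottom-up: the category-selective response is the degree to which the
    stimulus's low-level features match a category template.
    Top-down: during a task, the intraparietal sulcus (IPS) scales this
    bottom-up representation; ips_gain stands in for task demands.
    """
    # Bottom-up drive: similarity between stimulus features and the template,
    # rectified so that anti-correlated stimuli do not drive the region.
    match = max(0.0, float(np.dot(stimulus_features, category_template)))
    # Top-down modulation: IPS engagement multiplicatively scales the
    # bottom-up representation (larger gain for more demanding tasks).
    return ips_gain * match

# Toy usage: a word-like stimulus matches the word template better than a
# scrambled one, and a demanding task on the stimulus amplifies the response.
rng = np.random.default_rng(0)
word_template = rng.normal(size=100)
word_stimulus = word_template + 0.5 * rng.normal(size=100)
scrambled = rng.normal(size=100)

print(vtc_response(word_stimulus, word_template, ips_gain=1.0))  # fixation-like task
print(vtc_response(word_stimulus, word_template, ips_gain=2.0))  # categorization task
print(vtc_response(scrambled, word_template, ips_gain=2.0))      # poor template match
```

Multiplicative gain, rather than an additive offset, is chosen here because the abstract states that the bottom-up representation is "scaled" by the IPS.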

Cite

APA

Kay, K. N., & Yeatman, J. D. (2017). Bottom-up and top-down computations in word- and face-selective cortex. eLife, 6. https://doi.org/10.7554/elife.22341
