Our ability to perceive and recognize objects, people, and meaningful action events is a cognitive function of prime importance, characterized by an interplay of visual, auditory, and sensory-motor processing. One goal of sensory neuroscience is to better understand multisensory perception, including how information from the auditory and visual systems may merge to create stable, unified representations of objects and actions in our environment. This chapter summarizes and compares results from 49 paradigms published over the past decade that have explicitly examined human brain regions associated with audio-visual interactions. A series of meta-analyses compares and contrasts distinct cortical networks preferentially activated under five major types of audio-visual interaction: (1) matching spatial and/or temporal features of nonnatural objects, (2 and 3) matching crossmodal features characteristic of natural objects (moving versus static images, respectively), (4) associating artificial audio-visual pairings (e.g., written/spoken language), and (5) processing auditory and visual stimuli that are incongruent. These meta-analysis results are discussed in the context of cognitive theories regarding how object knowledge representations may mesh with the multiple parallel pathways that appear to mediate audio-visual perception.
Lewis, J. W. (2010). Audio-visual perception of everyday natural objects - Hemodynamic studies in humans. In Multisensory Object Perception in the Primate Brain (pp. 155–190). Springer New York. https://doi.org/10.1007/978-1-4419-5615-6_10