Abstract
Despite several demonstrations of the crossmodal semantic-congruency effect, it remains controversial whether it is a genuine perceptual phenomenon or whether it instead results from post-perceptual factors such as response bias, decision, or strategy (de Gelder and Bertelson, 2003). Here we combined invisible visual stimuli with sounds to exclude participants' awareness of the relation between the visual and auditory stimuli. We rendered the visual events invisible by adopting the continuous flash suppression paradigm (Tsuchiya and Koch, 2005), in which dynamic high-contrast patches presented to one eye suppress a target presented to the other eye. The semantic congruency between the visual and auditory stimuli was manipulated, and participants had to detect any part of the visual target. The results showed that the visual target was detected faster (i.e., released from suppression sooner) when it was accompanied by a semantically congruent sound than by an incongruent one. This study therefore demonstrates genuine multisensory integration at the semantic level. Furthermore, it extends previous findings with neglect and blindsight patients (e.g., de Gelder, Pourtois, and Weiskrantz, 2002) to normal participants who were unaware of the relation between the visual and auditory information.
Citation
Yang, Y.-H., & Yeh, S.-L. (2011). Semantic Congruency in Audiovisual Integration as Revealed by the Continuous Flash Suppression Paradigm. I-Perception, 2(8), 840–840. https://doi.org/10.1068/ic840