Towards computational modelling of neural multimodal integration based on the superior colliculus concept

Abstract

Information processing and responding to sensory input with appropriate actions are among the most important capabilities of the brain, and the brain has dedicated areas for auditory and visual processing. Auditory information passes first through the cochlea, then through the inferior colliculus, and finally to the auditory cortex, where it is processed further so that the eyes, the head, or both can be turned towards an object or location in response. Visual information is processed in the retina, in various subsequent nuclei, and then in the visual cortex before actions are performed. However, how is this information integrated, and what is the effect of auditory and visual stimuli arriving at the same time or at different times? Which information is processed when, and what are the responses to multimodal stimuli? Multimodal integration is first performed in the superior colliculus, located in a subcortical part of the midbrain. In this chapter we focus on this first level of multimodal integration, outline various approaches to modelling the superior colliculus, and suggest a model of multimodal integration of visual and auditory information. © 2009 Springer-Verlag Berlin Heidelberg.
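
As a rough illustration of what such a model of multimodal integration might compute (not the model proposed in the chapter itself), the following Python sketch combines one-dimensional visual and auditory activity maps over azimuth and adds an enhancement term when the two stimuli are temporally coincident, echoing the spatial and temporal coincidence rule often attributed to superior colliculus neurons. The function names, map sizes, and parameters are illustrative assumptions.

```python
import numpy as np

def gaussian_map(centre, size=180, sigma=10.0):
    """Illustrative 1-D activity map over azimuth (degrees), peaked at `centre`."""
    x = np.arange(size) - size / 2
    return np.exp(-((x - centre) ** 2) / (2 * sigma ** 2))

def integrate(visual, auditory, dt_ms=0.0, window_ms=100.0):
    """Toy multimodal integration: sum the unimodal maps and add a
    superadditive enhancement term when the stimuli fall within a
    temporal coincidence window."""
    coincidence = max(0.0, 1.0 - abs(dt_ms) / window_ms)  # 1 when simultaneous, 0 outside the window
    enhancement = coincidence * visual * auditory          # large only where both maps are active
    return visual + auditory + enhancement

# Example: a visual and an auditory stimulus at nearby azimuths, arriving 20 ms apart.
v = gaussian_map(centre=10.0)
a = gaussian_map(centre=15.0, sigma=20.0)  # auditory localisation is typically broader
response = integrate(v, a, dt_ms=20.0)
print("preferred direction (deg):", np.argmax(response) - 90)
```

In this sketch the multiplicative term only contributes where both maps are active and the stimuli arrive close in time, so coincident bimodal stimulation yields a stronger, sharper response than either modality alone; the actual model discussed in the chapter may differ.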

Citation (APA)

Ravulakollu, K., Knowles, M., Liu, J., & Wermter, S. (2009). Towards computational modelling of neural multimodal integration based on the superior colliculus concept. Studies in Computational Intelligence, 247, 269–291. https://doi.org/10.1007/978-3-642-04003-0_11
