Combining multimodal sensory input for spatial learning

Abstract

For robust self-localisation in real environments, autonomous agents must rely on multimodal sensory information. The relative importance of a sensory modality is not constant during the agent-environment interaction. We study the interrelation between visual and tactile information in a spatial learning task. We adopt a biologically inspired approach to detect multimodal correlations based on the properties of neurons in the superior colliculus. Reward-based Hebbian learning is applied to train an active gating network that weighs the individual senses depending on the current environmental conditions. The model is implemented and tested on a mobile robot platform. © Springer-Verlag Berlin Heidelberg 2002.
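To make the abstract's description of reward-based Hebbian learning of a sensory gating network more concrete, the following is a minimal illustrative sketch. All names, dimensions, and the exact update rule are assumptions for illustration; they are not taken from the paper's implementation.

```python
import numpy as np

# Hypothetical sketch: a gating network assigns one gain per sensory feature
# (visual + tactile) and is updated with a reward-modulated Hebbian rule.
# Feature sizes, learning rate, and reward signal are assumed values.

rng = np.random.default_rng(0)

N_VISUAL, N_TACTILE = 8, 4            # assumed feature counts per modality
N_IN = N_VISUAL + N_TACTILE
LEARNING_RATE = 0.01

# Gating weights, initialised to a uniform gain over all input features.
w = np.full(N_IN, 1.0 / N_IN)

def gate_inputs(visual, tactile, w):
    """Combine both modalities, scaling each feature by its normalised gain."""
    x = np.concatenate([visual, tactile])
    g = w / w.sum()
    return x * g, x, g

def reward_hebbian_update(w, x, g, reward, lr=LEARNING_RATE):
    """Reward-modulated Hebbian step: increase gains on features that were
    active (pre) and passed through the gate (post) when reward was received."""
    w = w + lr * reward * x * g
    return np.clip(w, 1e-6, None)     # keep gains positive

# One illustrative trial: noisy visual cue, more reliable tactile cue.
visual = rng.normal(0.5, 0.3, N_VISUAL)
tactile = rng.normal(0.5, 0.05, N_TACTILE)
gated, x, g = gate_inputs(visual, tactile, w)
reward = 1.0                          # e.g. localisation error below a threshold
w = reward_hebbian_update(w, x, g, reward)
```

Over repeated trials, a rule of this kind shifts the gains toward whichever modality is currently most predictive of reward, which is the intuition behind weighting senses by environmental conditions.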

Citation (APA)

Strösslin, T., Krebser, C., Arleo, A., & Gerstner, W. (2002). Combining multimodal sensory input for spatial learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2415 LNCS, pp. 87–92). Springer Verlag. https://doi.org/10.1007/3-540-46084-5_15
