Multimodal integration of visual place cells and grid cells for navigation tasks of a real robot


Abstract

In the present study, we propose a model of multimodal place cells merging visual and proprioceptive primitives. First, we briefly present our previous sensory-motor architecture, highlighting the limitations of a purely vision-based system. Then we introduce a new model of proprioceptive localization, giving rise to so-called grid cells, which are congruent with neurobiological studies on rodents. Finally, we show how a simple conditioning rule between the two modalities can outperform vision-only driven models by producing robust multimodal place cells. Experiments show that this model enhances robot localization and also makes it possible to solve some benchmark problems for real-life robotics applications. © 2012 Springer-Verlag.
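As an illustration of the kind of merging the abstract describes, here is a minimal toy sketch (not the authors' implementation): grid-cell-like activity from path integration is associated with visual place-cell activity through a simple Widrow-Hoff style conditioning rule, yielding a multimodal place response. All names, layer sizes, and the learning rate are hypothetical assumptions made for this sketch.

```python
import numpy as np

# Toy sketch only: assumed sizes and rule, not the paper's actual architecture.
N_VISUAL = 50      # number of visual place cells (assumption)
N_GRID = 30        # number of grid cells (assumption)
LEARNING_RATE = 0.1

rng = np.random.default_rng(0)
# Grid -> place association weights, one row per multimodal place cell.
W = rng.normal(scale=0.01, size=(N_VISUAL, N_GRID))

def multimodal_place_activity(visual, grid, W):
    """Merge the visual drive with the place activity predicted from grid cells."""
    proprio_prediction = W @ grid
    return np.maximum(visual + proprio_prediction, 0.0)  # simple rectified sum

def conditioning_update(visual, grid, W, lr=LEARNING_RATE):
    """Widrow-Hoff style rule: grid input learns to predict the visual signal."""
    error = visual - W @ grid
    return W + lr * np.outer(error, grid)

# Usage with random activities standing in for real sensor-driven responses.
visual = rng.random(N_VISUAL)
grid = rng.random(N_GRID)
place = multimodal_place_activity(visual, grid, W)
W = conditioning_update(visual, grid, W)
```

The point of the sketch is the division of labor: the visual pathway provides the teaching signal, the proprioceptive (grid-cell) pathway learns to predict it, and the merged response remains informative when one modality degrades.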

Citation (APA)

Jauffret, A., Cuperlier, N., Gaussier, P., & Tarroux, P. (2012). Multimodal integration of visual place cells and grid cells for navigation tasks of a real robot. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7426 LNAI, pp. 136–145). https://doi.org/10.1007/978-3-642-33093-3_14
