Bio-inspired model of spatial cognition

Abstract

We present the results of ongoing research in the area of symbol grounding. We develop a biologically inspired model for grounding spatial terms that employs separate visual "what" and "where" subsystems, integrated with a symbolic linguistic subsystem in a simplified neural model. The model grounds the color, shape, and spatial relations of two objects in 2D space. Images containing two objects are presented to an artificial retina, and five-word sentences describing them (e.g., "Red box above green circle"), given a phonological encoding, serve as auditory inputs. The integrating multimodal module in the second layer is implemented by either the Self-Organizing Map (SOM) or the Neural Gas (NG) algorithm. We found that using NG leads to better performance, especially for scenes with higher complexity. Current simulations also reveal that splitting the visual information and simplifying the objects to rectangular monochromatic boxes improves the performance of the "where" system and hence the overall functionality of the model. © 2011 Springer-Verlag.
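The abstract does not include implementation details, but the key difference between the two integration options is that Neural Gas adapts units by distance rank rather than by position on a fixed grid. The sketch below is only a rough illustration of that rank-based update; the class name, dimensions, learning parameters, and the concatenated "what"/"where"/phonological input vector are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

class NeuralGas:
    """Minimal Neural Gas layer: units are ranked by distance to the input
    and pulled toward it with an exponentially decaying rank weight."""

    def __init__(self, n_units, dim, epsilon=0.5, lam=10.0, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.uniform(0.0, 1.0, size=(n_units, dim))
        self.epsilon = epsilon   # learning rate
        self.lam = lam           # neighborhood range over ranks

    def train_step(self, x):
        # Rank all units by Euclidean distance to the input vector.
        dists = np.linalg.norm(self.weights - x, axis=1)
        ranks = np.argsort(np.argsort(dists))
        # Rank-based neighborhood: closer-ranked units move more.
        h = np.exp(-ranks / self.lam)
        self.weights += self.epsilon * h[:, None] * (x - self.weights)

    def winner(self, x):
        # Index of the best-matching unit for a multimodal input vector.
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))


if __name__ == "__main__":
    # Hypothetical multimodal inputs: concatenated "what", "where", and
    # phonological feature vectors for one scene/sentence pair (random here).
    rng = np.random.default_rng(1)
    what_dim, where_dim, phon_dim = 8, 4, 12
    data = rng.random((200, what_dim + where_dim + phon_dim))

    ng = NeuralGas(n_units=25, dim=data.shape[1])
    for epoch in range(10):
        for x in rng.permutation(data):
            ng.train_step(x)

    print("winning unit for first sample:", ng.winner(data[0]))
```

In a SOM the neighborhood function would instead depend on grid distance from the best-matching unit, which is why a rank-based scheme such as NG can adapt more flexibly to complex input distributions, consistent with the performance difference reported in the abstract.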

Citation (APA)

Vavrečka, M., Farkaš, I., & Lhotská, L. (2011). Bio-inspired model of spatial cognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7062 LNCS, pp. 443–450). https://doi.org/10.1007/978-3-642-24955-6_53
