Evolving distributed representations for language with self-organizing maps

Abstract

We present a neural-competitive learning model of language evolution in which several symbol sequences compete to signify a given propositional meaning. Both symbol sequences and propositional meanings are represented by high-dimensional vectors of real numbers. A neural network learns to map between the distributed representations of the symbol sequences and the distributed representations of the propositions. Unlike previous neural network models of language evolution, our model uses a Kohonen Self-Organizing Map with unsupervised learning, thereby avoiding the computational slowdown and biological implausibility of back-propagation networks and the lack of scalability associated with Hebbian-learning networks. After several evolutionary generations, the network develops systematically regular mappings between meanings and sequences, of the sort traditionally associated with symbolic grammars. Because of the potential of neural-like representations for addressing the symbol-grounding problem, this sort of model holds a good deal of promise as a new explanatory mechanism for both language evolution and acquisition.
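The abstract names the training procedure but no code is given, so the following is only a minimal Python sketch of the Kohonen Self-Organizing Map algorithm it refers to: unsupervised competitive learning over high-dimensional real-valued vectors, with no error back-propagation. It is not the authors' implementation; the grid size, vector dimensionality, learning-rate schedule, and Gaussian neighborhood below are all assumed illustrative values.

import numpy as np

# Minimal Kohonen SOM sketch (hypothetical parameters; not the
# implementation used in the paper).

rng = np.random.default_rng(0)

GRID = 10   # assumed: 10 x 10 grid of map units
DIM = 50    # assumed: dimensionality of the distributed representations

# Grid coordinates of every unit, used for neighborhood distances.
coords = np.stack(
    np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij"), axis=-1
)

def train_som(data, epochs=100, lr0=0.5, sigma0=GRID / 2.0):
    """Unsupervised competitive learning: each input vector pulls its
    best-matching unit (BMU) and the BMU's grid neighbors toward it."""
    weights = rng.random((GRID, GRID, DIM))  # one weight vector per unit
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood
        for x in data:
            # Competition: the unit whose weights lie closest to x wins.
            bmu = np.unravel_index(
                np.argmin(np.linalg.norm(weights - x, axis=-1)),
                (GRID, GRID),
            )
            # Cooperation: Gaussian falloff around the BMU on the 2-D grid.
            d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-d2 / (2.0 * sigma**2))[..., None]
            # Adaptation: move weights toward the input, scaled by h.
            weights += lr * h * (x - weights)
    return weights

# Usage: train on stand-in "distributed representations" (random vectors
# here; in the model these would encode meanings or symbol sequences).
meanings = rng.random((200, DIM))
som = train_som(meanings)

Because nearby units come to respond to nearby inputs, a map trained this way preserves the topology of the input space, which is what lets the model associate the space of meaning vectors with the space of sequence vectors without supervised error signals.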

Citation (APA)

Levy, S. D., & Kirby, S. (2006). Evolving distributed representations for language with self-organizing maps. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4211 LNAI, pp. 57–71). Springer Verlag. https://doi.org/10.1007/11880172_5
