Enhancing topology preservation during neural field development via wiring length minimization

Abstract

We recently proposed a recurrent neural network model for the development of dynamic neural fields [1]. The learning regime incorporates homeostatic processes, so that the network is able to self-organize and maintain a stable operating mode even in the face of experience-driven changes in synaptic strengths. However, the learned mappings are not guaranteed to be topology preserving. Here we extend our model by incorporating an additional mechanism that changes the positions of neurons in the output space. This algorithm operates with a purely local objective function, the minimization of wiring length, and runs in parallel to the above-mentioned learning process. We show experimentally that incorporating this additional mechanism leads to a significant decrease in topological defects and further enhances the quality of the learned mappings. Moreover, the proposed algorithm is not limited to our network model; rather, it can be applied to any type of self-organizing map. © Springer-Verlag Berlin Heidelberg 2008.
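The abstract does not spell out the update rule, but the core idea of a purely local, wiring-length-reducing rearrangement of neuron positions in the output space can be sketched as follows. This is a minimal illustration only: the grid size, the symmetric lateral connection matrix W, and the greedy pairwise position-swap criterion are assumptions made for the example, not the paper's actual algorithm.

```python
import numpy as np

# Sketch (illustrative assumptions): neurons occupy positions on a regular
# output grid and are linked by lateral connection strengths W. The wiring
# length is the connection-weighted sum of output-space distances. A purely
# local rule repeatedly picks two neurons and swaps their output positions
# if the swap reduces the wiring length contributed by their own connections.

rng = np.random.default_rng(0)

n_neurons = 25                                        # 5x5 output grid
positions = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
W = rng.random((n_neurons, n_neurons))                # illustrative lateral strengths
W = (W + W.T) / 2                                     # symmetric for simplicity
np.fill_diagonal(W, 0.0)

def wiring_length(pos, W):
    """Total wiring length: connection strength times output-space distance."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return 0.5 * np.sum(W * d)

def local_swap_step(pos, W, i, j):
    """Swap the output positions of neurons i and j if this lowers the wiring
    length of their own connections (a purely local criterion)."""
    cost_before = (W[i] @ np.linalg.norm(pos - pos[i], axis=1)
                   + W[j] @ np.linalg.norm(pos - pos[j], axis=1))
    swapped = pos.copy()
    swapped[[i, j]] = swapped[[j, i]]
    cost_after = (W[i] @ np.linalg.norm(swapped - swapped[i], axis=1)
                  + W[j] @ np.linalg.norm(swapped - swapped[j], axis=1))
    return swapped if cost_after < cost_before else pos

print("wiring length before:", wiring_length(positions, W))
for _ in range(2000):
    i, j = rng.choice(n_neurons, size=2, replace=False)
    positions = local_swap_step(positions, W, i, j)
print("wiring length after: ", wiring_length(positions, W))
```

In a full model, such swap attempts would be interleaved with the homeostatic synaptic learning updates described above, so that position rearrangement and weight learning effectively run in parallel.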

Citation (APA)

Gläser, C., Joublin, F., & Goerick, C. (2008). Enhancing topology preservation during neural field development via wiring length minimization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5163 LNCS, pp. 593–602). https://doi.org/10.1007/978-3-540-87536-9_61
