Models of innate neural attractors and their applications for neural information processing

Abstract

In this work we reveal and explore a new class of attractor neural networks based on inborn connections provided by model molecular markers: the molecular marker based attractor neural networks (MMBANN). Each set of markers has a metric, which is used to make connections between neurons carrying the markers. We explore the conditions for the existence of attractor states, the critical relations between their parameters, and the spectrum of single-neuron models that can implement MMBANN. In addition, we describe functional models (a perceptron and a SOM) that obtain significant advantages over the traditional implementations of these models when using MMBANN. In particular, a perceptron based on MMBANN gains specificity by orders of magnitude in error probability, the MMBANN SOM acquires real neurophysiological meaning, and the number of possible grandmother cells increases 1000-fold with MMBANN. MMBANN have sets of attractor states which can serve as finite grids for the representation of variables in computations. These grids may have dimensions d = 0, 1, 2, …. We work with static and dynamic attractor neural networks of dimensions d = 0 and d = 1. We also argue that the number of dimensions that can be represented by attractors of activity of neural networks with N = 10^4 elements does not exceed 8.
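To illustrate the core idea of a marker metric determining innate connection weights, here is a minimal sketch in Python. It is not the authors' implementation: the scalar ring-shaped markers, the Gaussian-minus-constant connection kernel, and the rate-model relaxation dynamics are all assumptions chosen only to show how a d = 1 family of attractor (bump) states can arise from marker-based inborn connectivity.

```python
import numpy as np

# Hypothetical sketch: neurons carry scalar "molecular marker" values on a
# ring; connection weights depend only on the metric distance between
# markers, giving an innate (inborn) weight matrix. Relaxing a thresholded
# rate model lets a localized bump of activity settle into one of a grid of
# attractor states (dimension d = 1).

N = 200                                                      # number of neurons
markers = np.linspace(0.0, 2 * np.pi, N, endpoint=False)     # marker values on a ring

# Circular distance between markers (the assumed marker metric)
diff = np.abs(markers[:, None] - markers[None, :])
dist = np.minimum(diff, 2 * np.pi - diff)

# Innate weights: short-range excitation minus global inhibition (assumed kernel)
sigma = 0.3
W = np.exp(-(dist / sigma) ** 2) - 0.5

def relax(x, steps=200, dt=0.1):
    """Relax network activity toward an attractor (assumed rate dynamics)."""
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(W @ x / N, 0.0))
    return x

# Start from random activity; the final state is a localized bump whose
# position indexes a point on the d = 1 attractor grid.
rng = np.random.default_rng(0)
x = relax(rng.random(N))
print("bump centred near marker:", markers[np.argmax(x)])
```

Varying the initial activity moves the final bump along the ring, so the set of stable bump positions plays the role of the finite one-dimensional grid of attractor states described in the abstract.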

Citation (APA)

Solovyeva, K. P., Karandashev, I. M., Zhavoronkov, A., & Dunin-Barkowski, W. L. (2016). Models of innate neural attractors and their applications for neural information processing. Frontiers in Systems Neuroscience, 9, 178. https://doi.org/10.3389/fnsys.2015.00178
