Memory Capacity of Networks with Stochastic Binary Synapses

Abstract

In standard attractor neural network models, specific patterns of activity are stored in the synaptic matrix, so that they become fixed point attractors of the network dynamics. The storage capacity of such networks has been quantified in two ways: the maximal number of patterns that can be stored, and the stored information measured in bits per synapse. In this paper, we compute both quantities in fully connected networks of N binary neurons with binary synapses, storing patterns with coding level f, in the large N and sparse coding limits (N → ∞, f → 0). We also derive finite-size corrections that accurately reproduce the results of simulations in networks of tens of thousands of neurons. These methods are applied to three different scenarios: (1) the classic Willshaw model, (2) networks with stochastic learning in which patterns are shown only once (one-shot learning), (3) networks with stochastic learning in which patterns are shown multiple times. The storage capacities are optimized over network parameters, which allows us to compare the performance of the different models. We show that finite-size effects strongly reduce the capacity, even for networks of realistic sizes. We discuss the implications of these results for memory storage in the hippocampus and cerebral cortex.
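
For illustration, the sketch below shows the two binary-synapse learning rules mentioned in the abstract (the Willshaw rule and a stochastic one-shot rule) in NumPy. The network size, coding level, potentiation probability q_plus, and retrieval threshold theta are arbitrary values chosen for the example, not the optimized parameters derived in the paper.

```python
import numpy as np

# Minimal sketch of pattern storage with binary synapses (illustrative
# parameters only, not the authors' optimized settings).
rng = np.random.default_rng(0)
N, P, f = 1000, 200, 0.02            # network size, number of patterns, coding level
patterns = (rng.random((P, N)) < f).astype(np.uint8)

# Willshaw rule: a synapse is set to 1 whenever its pre- and postsynaptic
# neurons are both active in at least one stored pattern.
W_willshaw = (patterns.T @ patterns > 0).astype(np.uint8)
np.fill_diagonal(W_willshaw, 0)

# Stochastic one-shot rule (illustrative): each pattern is shown once, and a
# coincident activation potentiates the synapse only with probability q_plus.
q_plus = 0.5
W_stoch = np.zeros((N, N), dtype=np.uint8)
for xi in patterns:
    coactive = np.outer(xi, xi).astype(bool)
    flip = rng.random((N, N)) < q_plus
    W_stoch[coactive & flip] = 1
np.fill_diagonal(W_stoch, 0)

# Retrieval check: present a stored pattern and threshold the synaptic input.
def recall(W, xi, theta):
    return (W @ xi.astype(np.int64) >= theta).astype(np.uint8)

theta = 0.7 * f * N                  # assumed threshold, tuned by hand for the example
errors = np.mean(recall(W_willshaw, patterns[0], theta) != patterns[0])
print(f"fraction of erroneous units (Willshaw, pattern 0): {errors:.4f}")
```

The fraction of erroneous units printed at the end gives a rough sense of retrieval quality for a single pattern; the paper's capacity calculations instead quantify, analytically and with finite-size corrections, how many such patterns can be stored and how many bits per synapse they represent.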

Citation (APA)
Dubreuil, A. M., Amit, Y., & Brunel, N. (2014). Memory Capacity of Networks with Stochastic Binary Synapses. PLoS Computational Biology, 10(8). https://doi.org/10.1371/journal.pcbi.1003727
