SpaRCe: Improved Learning of Reservoir Computing Systems Through Sparse Representations

Citations: 15
Mendeley readers: 46

Abstract

"Sparse"neural networks, in which relatively few neurons or connections are active, are common in both machine learning and neuroscience. While, in machine learning, "sparsity"is related to a penalty term that leads to some connecting weights becoming small or zero, in biological brains, sparsity is often created when high spiking thresholds prevent neuronal activity. Here, we introduce sparsity into a reservoir computing network via neuron-specific learnable thresholds of activity, allowing neurons with low thresholds to contribute to decision-making but suppressing information from neurons with high thresholds. This approach, which we term "SpaRCe,"optimizes the sparsity level of the reservoir without affecting the reservoir dynamics. The read-out weights and the thresholds are learned by an online gradient rule that minimizes an error function on the outputs of the network. Threshold learning occurs by the balance of two opposing forces: reducing interneuronal correlations in the reservoir by deactivating redundant neurons, while increasing the activity of neurons participating in correct decisions. We test SpaRCe on classification problems and find that threshold learning improves performance compared to standard reservoir computing. SpaRCe alleviates the problem of catastrophic forgetting, a problem most evident in standard echo state networks (ESNs) and recurrent neural networks in general, due to increasing the number of task-specialized neurons that are included in the network decisions.


Citation (APA)
Manneschi, L., Lin, A. C., & Vasilaki, E. (2023). SpaRCe: Improved Learning of Reservoir Computing Systems Through Sparse Representations. IEEE Transactions on Neural Networks and Learning Systems, 34(2), 824–838. https://doi.org/10.1109/TNNLS.2021.3102378

Readers over time
[Yearly readers chart, 2021–2025; vertical axis 0–16 readers]

Readers' Seniority
PhD / Post grad / Masters / Doc: 16 (64%)
Researcher: 7 (28%)
Professor / Associate Prof.: 2 (8%)

Readers' Discipline
Physics and Astronomy: 9 (43%)
Computer Science: 6 (29%)
Neuroscience: 3 (14%)
Engineering: 3 (14%)

Article Metrics
Blog mentions: 1
