Neuronal-Plasticity and Reward-Propagation Improved Recurrent Spiking Neural Networks

Abstract

Different types of dynamics and plasticity principles found in biological neural networks have been successfully applied to spiking neural networks (SNNs), which offer biologically plausible, efficient, and robust computation compared with their deep neural network (DNN) counterparts. Here, we propose a Neuronal-plasticity and Reward-propagation improved Recurrent SNN (NRR-SNN). A history-dependent adaptive threshold with two channels is highlighted as an important form of neuronal plasticity that enriches neuronal dynamics, and global labels, instead of errors, are used as the reward signal for parallel gradient propagation. In addition, a recurrent loop with appropriate sparseness is designed for robust computation. The proposed NRR-SNN achieves higher accuracy and more robust computation on two sequential datasets (TIDigits and TIMIT), demonstrating the benefit of these biologically plausible improvements.
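The history-dependent adaptive threshold mentioned above can be illustrated with a leaky integrate-and-fire neuron whose firing threshold is raised by each of its own spikes and then decays back toward a baseline. This is a minimal single-channel sketch, not the paper's exact two-channel formulation; all parameter values and function names here are illustrative assumptions.

```python
import numpy as np

def lif_adaptive_step(v, thresh, x, tau_m=20.0, tau_a=200.0, v_rest=0.0,
                      thresh_base=1.0, beta=0.5, dt=1.0):
    """One Euler step of a LIF neuron with a spike-history-dependent
    threshold (hypothetical parameters, for illustration only)."""
    # Leaky integration of the input current x toward the resting potential.
    v = v + (dt / tau_m) * (v_rest - v + x)
    # A neuron spikes when its membrane potential crosses its own threshold.
    spike = (v >= thresh).astype(float)
    # Reset spiking neurons; the adaptive part of the threshold decays
    # toward the baseline and jumps by beta for every spike emitted.
    v = np.where(spike > 0, v_rest, v)
    thresh = thresh_base + (thresh - thresh_base) * np.exp(-dt / tau_a) + beta * spike
    return v, thresh, spike

# Usage: three neurons driven by constant inputs of different strengths.
v = np.zeros(3)
thresh = np.ones(3)
for t in range(50):
    v, thresh, s = lif_adaptive_step(v, thresh, x=np.array([0.0, 0.5, 1.5]))
```

Because the threshold integrates each neuron's own spike history, strongly driven neurons end up with elevated thresholds while silent neurons keep the baseline, which damps runaway firing and increases the diversity of neuronal dynamics.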

Citation (APA)

Jia, S., Zhang, T., Cheng, X., Liu, H., & Xu, B. (2021). Neuronal-Plasticity and Reward-Propagation Improved Recurrent Spiking Neural Networks. Frontiers in Neuroscience, 15. https://doi.org/10.3389/fnins.2021.654786
