Connection Pruning for Deep Spiking Neural Networks with On-Chip Learning

Abstract

Long training time hinders deep, large-scale Spiking Neural Networks (SNNs) with on-chip learning capability from being realized on embedded hardware. Our work proposes a novel connection pruning approach that can be applied during on-chip Spike Timing Dependent Plasticity (STDP)-based learning to optimize both the learning time and the network connectivity of a deep SNN. Applying our approach to a deep SNN with Time To First Spike (TTFS) coding, we achieved a 2.1x speed-up and 64% energy savings in on-chip learning and reduced the network connectivity by 92.83%, without incurring any accuracy loss. Moreover, the connectivity reduction yields a 2.83x speed-up and 78.24% energy savings in inference. Evaluation of our proposed approach on a Field Programmable Gate Array (FPGA) platform showed that only a 0.56% power overhead was needed to implement the pruning algorithm.
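The paper's pruning algorithm is not reproduced on this page, but the general idea of removing weak synapses during STDP-based learning can be illustrated with a minimal, self-contained sketch. Everything below is an assumption for illustration only: the pair-based STDP rule, the constants (A_PLUS, A_MINUS, TAU, PRUNE_THRESHOLD), and the simple weight-threshold pruning criterion are generic stand-ins, not the method evaluated in the paper.

```python
# Minimal sketch (NOT the authors' algorithm): pruning weak synapses
# during a simplified pair-based STDP loop, using NumPy only.
import numpy as np

rng = np.random.default_rng(0)

N_PRE, N_POST = 100, 10
weights = rng.uniform(0.0, 1.0, size=(N_PRE, N_POST))
mask = np.ones_like(weights, dtype=bool)   # True = connection still present

A_PLUS, A_MINUS = 0.01, 0.012              # hypothetical STDP magnitudes
TAU = 20.0                                 # hypothetical STDP time constant (ms)
PRUNE_THRESHOLD = 0.05                     # hypothetical pruning cutoff

def stdp_update(dt_matrix):
    """Pair-based STDP: potentiate when pre fires before post (dt > 0),
    depress otherwise. dt_matrix[i, j] = t_post_j - t_pre_i, in ms."""
    return np.where(dt_matrix > 0,
                    A_PLUS * np.exp(-dt_matrix / TAU),
                    -A_MINUS * np.exp(dt_matrix / TAU))

for epoch in range(50):
    # Random pre/post spike-time differences stand in for real spike data.
    dt = rng.uniform(-50.0, 50.0, size=(N_PRE, N_POST))
    weights += stdp_update(dt) * mask      # only surviving synapses learn
    np.clip(weights, 0.0, 1.0, out=weights)

    # Prune: permanently remove synapses whose weight has decayed below
    # the threshold, so they are skipped in all subsequent updates.
    mask &= weights >= PRUNE_THRESHOLD
    weights *= mask

print(f"remaining connectivity: {mask.mean() * 100:.1f}%")
```

The mask makes pruning permanent, so removed synapses are skipped in every later update; in this simplified setting, that skipping is what would translate into the learning-time and connectivity savings of the kind the abstract reports.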

Citation (APA)

Nguyen, T. N. N., Veeravalli, B., & Fong, X. (2021). Connection Pruning for Deep Spiking Neural Networks with On-Chip Learning. In ACM International Conference Proceeding Series. Association for Computing Machinery. https://doi.org/10.1145/3477145.3477157
