The liquid state machine (LSM) is a model of recurrent spiking neural networks (SNNs) and provides an appealing brain-inspired computing paradigm for machine-learning applications. Moreover, because it processes information directly on spiking events, the LSM is amenable to efficient event-driven hardware implementation. However, training SNNs is in general a difficult task, as synaptic weights must be updated based on neural firing activities while achieving a learning objective. In this article, we explore bio-plausible spike-timing-dependent plasticity (STDP) mechanisms to train liquid state machine models with and without supervision. First, we employ a supervised STDP rule to train the output layer of the LSM while delivering good classification performance. Furthermore, a hardware-friendly unsupervised STDP rule is leveraged to train the recurrent reservoir to further boost performance. We pursue efficient hardware implementation of FPGA LSM accelerators by performing algorithm-level optimization of the two proposed training rules and by exploiting the self-organizing behaviors naturally induced by STDP. Several recurrent spiking neural accelerators are built on a Xilinx Zynq ZC-706 platform and trained for speech recognition with the TI46 speech corpus as the benchmark. Adopting the two proposed unsupervised and supervised STDP rules improves recognition accuracy over a competitive non-STDP baseline training algorithm by up to 3.47%.
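The abstract does not give the exact form of the supervised and unsupervised STDP rules used in the paper. For orientation only, the sketch below shows a generic pair-based STDP weight update; the `stdp_update` helper and its hyperparameters (`A_plus`, `A_minus`, `tau_plus`, `tau_minus`) are illustrative assumptions, not the authors' rules.

```python
import numpy as np

# Illustrative pair-based STDP update (not the paper's specific supervised/
# unsupervised rules, whose exact forms are not stated in the abstract).
def stdp_update(w, pre_spikes, post_spikes,
                A_plus=0.01, A_minus=0.012,
                tau_plus=20.0, tau_minus=20.0,
                w_min=0.0, w_max=1.0):
    """Apply pair-based STDP weight changes for all pre/post spike-time pairs.

    w           : current synaptic weight
    pre_spikes  : presynaptic spike times (ms)
    post_spikes : postsynaptic spike times (ms)
    """
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:      # pre fires before post -> potentiation
                dw += A_plus * np.exp(-dt / tau_plus)
            elif dt < 0:    # post fires before pre -> depression
                dw -= A_minus * np.exp(dt / tau_minus)
    return np.clip(w + dw, w_min, w_max)

# Example: a presynaptic spike at 10 ms followed by a postsynaptic spike at
# 15 ms slightly strengthens the synapse.
print(stdp_update(0.5, np.array([10.0]), np.array([15.0])))
```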
Liu, Y., Yenamachintala, S. S., & Li, P. (2019). Energy-efficient FPGA spiking neural accelerators with supervised and unsupervised spike-timing-dependent-plasticity. ACM Journal on Emerging Technologies in Computing Systems, 15(3). https://doi.org/10.1145/3313866