Reducing ANN-SNN Conversion Error through Residual Membrane Potential

27 citations · 17 readers (Mendeley)

Abstract

Spiking Neural Networks (SNNs) have received extensive academic attention due to their unique properties of low power consumption and high-speed computing on neuromorphic chips. Among the various training methods for SNNs, ANN-SNN conversion has achieved performance comparable to ANNs on large-scale datasets. However, unevenness error, the deviation caused by different temporal sequences of spike arrivals at activation layers, has not been effectively resolved and seriously degrades the performance of SNNs under short time-steps. In this paper, we analyze unevenness error in detail and divide it into four categories. We point out that the case in which the ANN output is zero while the SNN output is larger than zero accounts for the largest percentage. Based on this, we theoretically prove the sufficient and necessary conditions for this case and propose an optimization strategy based on residual membrane potential to reduce unevenness error. Experimental results show that the proposed method achieves state-of-the-art performance on the CIFAR-10, CIFAR-100, and ImageNet datasets. For example, we reach a top-1 accuracy of 64.32% on ImageNet with 10 time-steps. To the best of our knowledge, this is the first time ANN-SNN conversion can simultaneously achieve high accuracy and ultra-low latency on this complex dataset. Code is available at https://github.com/hzc1208/ANN2SNN_SRP.
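The dominant error case the abstract describes (ANN output zero, SNN output above zero) can be illustrated with a toy integrate-and-fire simulation. This is a minimal sketch for intuition only, not the paper's method: the function name `if_neuron_spikes`, the soft-reset choice, and the example input sequence are all assumptions made here for illustration.

```python
def if_neuron_spikes(inputs, v_thresh=1.0):
    """Simulate a soft-reset integrate-and-fire neuron over a list of
    per-time-step inputs; return the total number of spikes emitted.
    (Illustrative sketch, not the paper's algorithm.)"""
    v = 0.0
    spikes = 0
    for x in inputs:
        v += x                      # integrate the input current
        if v >= v_thresh:           # fire when threshold is reached
            spikes += 1
            v -= v_thresh           # soft reset: subtract the threshold
    return spikes

# Inputs that sum to zero: the corresponding ANN activation is
# ReLU(mean input) = 0, yet the SNN still fires on the early positive
# inputs before the negative ones arrive.
inputs = [2.0, -2.0, 2.0, -2.0]
ann_out = max(0.0, sum(inputs) / len(inputs))  # ANN output: 0.0
snn_out = if_neuron_spikes(inputs)             # SNN output: 2 spikes
```

Because the spike count depends on the temporal order of the inputs, not just their sum, reordering the same inputs changes `snn_out` while `ann_out` stays fixed; this order dependence is exactly the unevenness error the paper targets.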


Citation (APA)

Hao, Z., Bu, T., Ding, J., Huang, T., & Yu, Z. (2023). Reducing ANN-SNN Conversion Error through Residual Membrane Potential. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023 (Vol. 37, pp. 11–21). AAAI Press. https://doi.org/10.1609/aaai.v37i1.25071
