SWIM: Selective Write-Verify for Computing-in-Memory Neural Accelerators

Abstract

Computing-in-Memory architectures based on non-volatile emerging memories have demonstrated great potential for deep neural network (DNN) acceleration thanks to their high energy efficiency. However, these emerging devices can suffer from significant variations during the mapping process (i.e., when programming weights onto the devices); left unaddressed, these variations cause significant accuracy degradation. The non-ideality of weight mapping can be compensated by iterative programming with a write-verify scheme, i.e., reading the conductance back and rewriting if necessary. In all existing works, this procedure is applied to every single weight of a DNN as it is mapped, which requires extensive programming time. In this work, we show that it is only necessary to write-verify a small portion of the weights to maintain DNN accuracy, thus achieving significant speedup. We further introduce SWIM, a second-derivative-based technique that requires only a single pass of forward and backpropagation to efficiently select the weights that need write-verify. Experimental results on various DNN architectures and datasets show that SWIM achieves up to 10x programming speedup compared with conventional full write-verify while attaining comparable accuracy.
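The idea in the abstract can be illustrated with a small sketch: model device programming as a noisy write, rank weights by a sensitivity score, and apply iterative write-verify only to the top-ranked fraction. This is a minimal toy in NumPy; the Gaussian noise model, tolerance, and the squared-gradient (diagonal-Fisher) proxy for the paper's second-derivative criterion are all assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def program_with_variation(w, sigma=0.05):
    # Model device programming: each write lands near the target
    # conductance with Gaussian variation (sigma is an assumed level).
    return w + rng.normal(0.0, sigma, size=np.shape(w))

def write_verify(w_target, tol=0.05, max_iters=20, sigma=0.05):
    # Iteratively program, read back, and reprogram until the
    # realized value is within tolerance of the target.
    for _ in range(max_iters):
        w = w_target + rng.normal(0.0, sigma)
        if abs(w - w_target) <= tol:
            break
    return w

# Toy layer: 1000 weights with stand-in gradients. SWIM's criterion is
# a second derivative obtained in one forward/backward pass; here we
# approximate importance with squared gradients (a diagonal-Fisher
# proxy), which is our assumption, not the paper's exact formula.
weights = rng.normal(0.0, 1.0, size=1000)
grads = rng.normal(0.0, 1.0, size=1000)
sensitivity = grads ** 2

# Write-verify only the top 10% most sensitive weights; program the
# rest in a single shot, accepting their variation.
k = int(0.1 * weights.size)
critical = np.argsort(sensitivity)[-k:]

programmed = program_with_variation(weights)
for i in critical:
    programmed[i] = write_verify(weights[i])
```

Because the inner loop runs for only 10% of the weights, the total number of program/read cycles shrinks roughly in proportion, which is the source of the reported speedup.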

Citation (APA)

Yan, Z., Hu, X. S., & Shi, Y. (2022). SWIM: Selective Write-Verify for Computing-in-Memory Neural Accelerators. In Proceedings - Design Automation Conference (pp. 277–282). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/3489517.3530459
