Enabling efficient ReRAM-based neural network computing via crossbar structure adaptive optimization

3 citations · 12 Mendeley readers
Abstract

Resistive random-access memory (ReRAM) based accelerators have been widely studied as a means of achieving neural network computing that is efficient in both speed and energy. Neural network optimization algorithms such as sparsity have been developed to accelerate neural network computing on traditional computer architectures such as CPUs and GPUs. However, these efficiency gains are hindered when such algorithms are deployed on ReRAM-based accelerators because of their unique crossbar-structured computations, and a specific algorithm-hardware co-optimization for the ReRAM-based architecture is still lacking. In this work, we propose an efficient neural network computing framework specialized for the crossbar-structured computations of ReRAM-based accelerators. The proposed framework includes a crossbar-specific feature map pruning method and an adaptive neural network deployment scheme. Experimental results show our design can improve computing accuracy by 9.1% compared with state-of-the-art sparse neural networks. Built on a well-known ReRAM-based DNN accelerator, the proposed framework demonstrates up to 1.4× speedup, 4.3× power efficiency, and 4.4× area saving.
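The key idea behind crossbar-specific pruning is that in a ReRAM accelerator, weights are mapped onto fixed-size crossbar tiles, so removing individual scattered weights saves nothing: only pruning at the granularity of whole crossbar rows/columns frees hardware resources. The sketch below illustrates this idea in NumPy. It is an illustrative reconstruction, not the authors' algorithm: the tile width `xbar_cols`, the L1-magnitude group score, and the `keep_ratio` parameter are all assumptions chosen for the example.

```python
import numpy as np

def crossbar_aligned_channel_prune(W, xbar_cols=4, keep_ratio=0.5):
    """Prune input channels (columns of W) in groups sized to the
    crossbar column width, so removed weights correspond to whole
    crossbar columns rather than scattered cells.

    W          : (out_features, in_features) weight matrix
    xbar_cols  : columns per crossbar tile (illustrative value)
    keep_ratio : fraction of channel groups to keep
    """
    out_f, in_f = W.shape
    assert in_f % xbar_cols == 0, "pad channels to a multiple of the tile width"
    n_groups = in_f // xbar_cols

    # Score each group of xbar_cols channels by its total L1 magnitude
    # (an assumed importance criterion for this sketch).
    scores = np.abs(W).reshape(out_f, n_groups, xbar_cols).sum(axis=(0, 2))

    # Keep the highest-scoring groups; zero out the rest.
    n_keep = max(1, int(round(n_groups * keep_ratio)))
    keep = np.argsort(scores)[::-1][:n_keep]
    mask = np.zeros(n_groups, dtype=bool)
    mask[keep] = True

    W_pruned = W.reshape(out_f, n_groups, xbar_cols).copy()
    W_pruned[:, ~mask, :] = 0.0   # these tile columns need not be mapped
    return W_pruned.reshape(out_f, in_f), mask
```

Because the pruned groups align with tile boundaries, a deployment step can simply skip mapping those columns onto crossbars, which is where the speed, power, and area savings reported in the abstract would come from.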

Citation (APA)
Liu, C., Yu, F., Qin, Z., & Chen, X. (2020). Enabling efficient ReRAM-based neural network computing via crossbar structure adaptive optimization. In ACM International Conference Proceeding Series. Association for Computing Machinery. https://doi.org/10.1145/3370748.3406581
