Artificial intelligence has sparked a global surge of interest, spanning image recognition, video retrieval, speech recognition, autonomous driving, and several other significant applications. Among artificial intelligence algorithms, neural networks play a crucial role and have attracted considerable attention from researchers. Neural networks are characterized by high flexibility, complex computation, and large data volumes, which in turn demand high performance, low power consumption, and flexibility from hardware computing platforms. This study proposes a reconfigurable hardware architecture to meet the flexibility requirements of neural networks. Based on the proposed architecture, corresponding data-access optimization schemes are explored to reduce power consumption. For the storage system, a neural network acceleration scheme based on eDRAM and ReRAM, which integrates computing and storage, satisfies the requirements of neural network computation. For high-performance computing, we propose convolution optimization schemes based on integral and filter-splitting feature reconstruction, enabling low-bit neural network operations to meet high-performance requirements.
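The low-bit operations mentioned above can be illustrated with a minimal sketch (this is not the paper's actual accelerator scheme; the symmetric int8 quantization, the naive convolution routine, and all names below are illustrative assumptions): weights and activations are quantized to 8-bit integers, the convolution is carried out with integer multiply-accumulates only, and the result is rescaled afterwards.

```python
# Illustrative sketch of low-bit inference (assumed scheme, not from the paper):
# quantize to int8, convolve in integer arithmetic, then rescale to float.
import numpy as np

def quantize(x, bits=8):
    """Symmetric uniform quantization to signed integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    return np.round(x / scale).astype(np.int32), scale

def conv2d_valid(x, w):
    """Naive 'valid' 2-D convolution (cross-correlation form)."""
    kh, kw = w.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1), dtype=x.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8)).astype(np.float32)
w = rng.standard_normal((3, 3)).astype(np.float32)

qx, sx = quantize(x)
qw, sw = quantize(w)
y_int = conv2d_valid(qx, qw)     # integer MACs only (cheap in hardware)
y_deq = y_int * (sx * sw)        # rescale back to the float domain
y_ref = conv2d_valid(x, w)       # full-precision reference

rel_err = np.max(np.abs(y_deq - y_ref)) / np.max(np.abs(y_ref))
print(rel_err < 0.1)
```

The hardware appeal is that the inner loop needs only integer multipliers and adders; the single floating-point rescale happens once per output, which is what makes low-bit datapaths attractive for low-power accelerators.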
Yan, J., Zhang, Y., Tu, F., Yang, J., Zheng, S., Ouyang, P., … Yin, S. (2019). Research on low-power neural network computing accelerator. Scientia Sinica Informationis, 49(3), 314–333. https://doi.org/10.1360/N112018-00282