An FPGA-based accelerator for deep neural network with novel reconfigurable architecture

Abstract

Owing to its high parallelism, the dataflow architecture is a common solution for deep neural network (DNN) acceleration; however, existing DNN acceleration solutions exhibit limited flexibility across diverse network models. This paper presents a novel reconfigurable architecture for DNN acceleration in which every circuit block can be reconfigured to adapt to different networks while maintaining high throughput. The proposed architecture transfers well to diverse DNN models thanks to its reconfigurable processing element (PE) array, which can be adjusted to handle the various filter sizes found in different networks. In addition, a reconfigurable on-chip buffer mechanism is proposed, built on a data reuse technique that exploits the differing proportions of parameters across DNN layers. Moreover, the accelerator further improves performance by exploiting the sparsity of input feature maps. Compared with other state-of-the-art FPGA-based solutions, our architecture achieves high performance while retaining good flexibility.
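
The abstract only summarizes the three reconfigurable mechanisms (PE array, on-chip buffer, sparsity exploitation) without implementation detail. As an illustration of the sparsity idea alone, the C sketch below models a zero-skipping multiply-accumulate loop in software: when an input activation is zero (common after ReLU), the multiply-accumulate is bypassed and merely counted. The filter size K, the function conv_pixel_sparse, and the data layout are illustrative assumptions, not the hardware design described in the paper.

#include <stdio.h>
#include <stddef.h>

/* Behavioral sketch of zero-skipping multiply-accumulate.
 * Illustrative only: names, sizes, and data types are assumptions,
 * not the paper's PE or buffer design. */

#define K 3  /* hypothetical filter size; the paper's PE array is said to
                adapt to different filter sizes */

/* Compute one output pixel, skipping MACs whose input activation is zero. */
static int conv_pixel_sparse(const int ifmap[K][K], const int weights[K][K],
                             size_t *skipped)
{
    int acc = 0;
    for (int r = 0; r < K; ++r) {
        for (int c = 0; c < K; ++c) {
            if (ifmap[r][c] == 0) {   /* sparsity check: zero activation */
                ++*skipped;           /* the MAC is skipped entirely     */
                continue;
            }
            acc += ifmap[r][c] * weights[r][c];
        }
    }
    return acc;
}

int main(void)
{
    /* Sparse 3x3 input window (mostly zeros, e.g. after ReLU). */
    const int ifmap[K][K]   = { {0, 2, 0}, {0, 0, 1}, {3, 0, 0} };
    const int weights[K][K] = { {1, 1, 1}, {1, 1, 1}, {1, 1, 1} };

    size_t skipped = 0;
    int out = conv_pixel_sparse(ifmap, weights, &skipped);

    printf("output = %d, MACs skipped = %zu of %d\n", out, skipped, K * K);
    return 0;
}

In the hardware described by the abstract, the analogous gain would presumably come from skipping PE cycles or memory fetches for zero activations rather than from a software branch as modeled here.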

Citation (APA)

Jia, H., Ren, D., & Zou, X. (2021). An FPGA-based accelerator for deep neural network with novel reconfigurable architecture. IEICE Electronics Express, 18(4). https://doi.org/10.1587/ELEX.18.20210012
