An Efficient Hardware Design for Accelerating Sparse CNNs with NAS-Based Models

Abstract

Deep convolutional neural networks (CNNs) have achieved remarkable performance at the cost of huge computation. As CNN models become deeper and more complex, compressing them into sparse models by pruning redundant connections has emerged as an attractive approach to reduce computation and memory requirements. Meanwhile, FPGAs have been demonstrated to be an effective hardware platform for accelerating CNN inference. However, most existing FPGA accelerators target dense CNN models and are inefficient when executing sparse models, as most of their arithmetic operations involve addition and multiplication with zero operands. In this work, we propose an accelerator with software-hardware co-design for sparse CNNs on FPGAs. To deal efficiently with the irregular connections in sparse convolutional layers, we propose a weight-oriented dataflow that exploits element-matrix multiplication as the key operation. Each weight is processed individually, which yields low decoding overhead. We then design an FPGA accelerator that features a tile look-up table (TLUT) and a channel multiplexer (CMUX). The TLUT matches the indices between sparse weights and input pixels, reducing the runtime decoding overhead to an efficient indexing operation. Moreover, we propose a weight layout that enables conflict-free on-chip memory access; a CMUX is inserted to locate the address and cooperate with this layout. Finally, we build a neural architecture search (NAS) engine that leverages the reconfigurability of FPGAs to generate an efficient CNN model and choose the optimal hardware design parameters. The experiments demonstrate that our accelerator achieves 223.4–309.0 GOP/s for modern CNNs on a Xilinx ZCU102, a 2.4×–12.9× speedup over previous dense CNN accelerators on FPGAs. Our FPGA-aware NAS approach shows a 2× speedup over MobileNetV2 with 1.5% accuracy loss.
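To make the weight-oriented dataflow concrete, here is a minimal NumPy sketch of the idea (an illustration, not the paper's hardware implementation): each nonzero weight is processed individually, and its contribution to the output is a single element-matrix multiplication, i.e., a scalar weight times an aligned tile of the input feature map. The function and variable names below are ours; the index matching done here with array slicing is what the TLUT performs in hardware.

```python
import numpy as np

def sparse_conv_weight_oriented(ifmap, weights, stride=1):
    """Weight-oriented sparse convolution (illustrative sketch).

    ifmap:   (C, H, W) input feature map
    weights: (K, C, R, S) pruned kernel tensor, mostly zeros
    returns: (K, Ho, Wo) output feature map
    """
    C, H, W = ifmap.shape
    K, _, R, S = weights.shape
    Ho = (H - R) // stride + 1
    Wo = (W - S) // stride + 1
    ofmap = np.zeros((K, Ho, Wo), dtype=ifmap.dtype)

    # Visit only the nonzero weights; zero operands are never touched.
    for k, c, r, s in zip(*np.nonzero(weights)):
        # Input tile whose pixels align with kernel offset (r, s).
        # In the accelerator, this index matching is the TLUT's job.
        tile = ifmap[c, r:r + stride * Ho:stride, s:s + stride * Wo:stride]
        # Element-matrix multiplication: one scalar weight times a tile,
        # accumulated into the k-th output plane.
        ofmap[k] += weights[k, c, r, s] * tile

    return ofmap

# Toy usage: a 3x8x8 input and a 4x3x3x3 kernel pruned to ~80% sparsity.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 3, 3, 3)) * (rng.random((4, 3, 3, 3)) < 0.2)
x = rng.standard_normal((3, 8, 8))
assert sparse_conv_weight_oriented(x, w).shape == (4, 6, 6)
```

Because only the nonzero weights are visited, the work scales with the density of the pruned model rather than with the dense kernel size, which is the source of the speedup the accelerator targets.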

Citation (APA)

Liang, Y., Lu, L., Jin, Y., Xie, J., Huang, R., Zhang, J., & Lin, W. (2022). An Efficient Hardware Design for Accelerating Sparse CNNs with NAS-Based Models. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 41(3), 597–613. https://doi.org/10.1109/TCAD.2021.3066563
