Acceleration of LSTM with Structured Pruning Method on FPGA

45 citations · 52 Mendeley readers · Open access article.

Abstract

This paper focuses on accelerating the long short-term memory (LSTM) network, one of the most popular types of recurrent neural networks (RNNs). Because of the large number of weight memory accesses and the high computational complexity of its cascade-dependent structure, efficiently implementing an LSTM on field-programmable gate arrays (FPGAs) is a major challenge. To speed up inference on an FPGA with limited resources, a structured pruning method is proposed that not only reduces the LSTM model's size without loss of prediction accuracy but also eliminates imbalanced computation and irregular memory accesses. In addition, a hardware architecture for the compressed LSTM is designed to achieve high performance. As a result, an implementation of an LSTM language model on a Stratix V GXA7 FPGA achieves 85.2 GOPS directly on the sparse LSTM network produced by our method, corresponding to an effective throughput of 681.6 GOPS on the dense network; the proposed structured pruning algorithm thus yields a 7.82x speedup when only 1/8 of the parameters are retained. We hope this method offers an efficient way to accelerate LSTMs and similar recurrent neural networks in resource-limited environments.
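The abstract does not spell out the exact pruning scheme, so the following is only a minimal Python/NumPy sketch of one common form of structured, load-balanced pruning: keeping the same number of largest-magnitude weights in every row of the LSTM weight matrices. The keep_ratio of 1/8 mirrors the "1/8 of the parameters retained" figure quoted above; the function name structured_prune and the matrix shapes are illustrative assumptions, not the authors' implementation.

# Assumed sketch: structured, row-balanced magnitude pruning of an LSTM weight matrix.
import numpy as np

def structured_prune(weight: np.ndarray, keep_ratio: float = 1.0 / 8.0) -> np.ndarray:
    """Zero out all but the top `keep_ratio` fraction of weights (by magnitude) in each row.

    Retaining the same count per row keeps the per-output-neuron workload
    balanced, which avoids the imbalanced computation and irregular memory
    accesses that unstructured sparsity would cause on an FPGA.
    """
    rows, cols = weight.shape
    keep = max(1, int(round(cols * keep_ratio)))
    pruned = np.zeros_like(weight)
    for r in range(rows):
        # Indices of the `keep` largest-magnitude weights in this row.
        top = np.argsort(np.abs(weight[r]))[-keep:]
        pruned[r, top] = weight[r, top]
    return pruned

if __name__ == "__main__":
    # Toy usage with a hypothetical stacked gate matrix [W_i; W_f; W_g; W_o].
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4 * 256, 256 + 128))
    W_sparse = structured_prune(W)
    print("density:", np.count_nonzero(W_sparse) / W_sparse.size)  # ~= 0.125

Because every row retains the same number of nonzero weights, the multiply-accumulate workload per output neuron is identical, which is what lets the FPGA processing elements stay balanced and the sparse weights be stored in a regular, compact format.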

Citation (APA)

Wang, S., Lin, P., Hu, R., Wang, H., He, J., Huang, Q., & Chang, S. (2019). Acceleration of LSTM with Structured Pruning Method on FPGA. IEEE Access, 7, 62930–62937. https://doi.org/10.1109/ACCESS.2019.2917312
