Deep networks often possess a vast number of parameters, and significant redundancy in their parameterization is now a widely recognized property. This redundancy presents significant challenges and restricts many deep learning applications, motivating efforts to reduce model complexity while preserving strong performance. In this paper, we present an overview of popular methods and review recent work on compressing and accelerating deep neural networks. We consider not only pruning methods but also quantization and low-rank factorization methods. This review also clarifies these major concepts and highlights their characteristics, advantages, and shortcomings.
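As a rough, self-contained illustration (not taken from the paper), the NumPy sketch below shows toy versions of the three method families surveyed here, namely magnitude pruning, uniform 8-bit quantization, and truncated-SVD low-rank factorization, applied to a single weight matrix. All shapes, sparsity levels, and ranks are illustrative assumptions, not values from the review.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64)).astype(np.float32)  # toy weight matrix

# Pruning: zero out the smallest-magnitude weights (here, 90% sparsity).
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Quantization: map float32 weights to 8-bit integers with a uniform scale.
scale = np.abs(W).max() / 127.0
W_q = np.round(W / scale).astype(np.int8)       # compact storage
W_dequant = W_q.astype(np.float32) * scale      # reconstruction at inference

# Low-rank factorization: approximate W by a rank-r product via truncated SVD.
r = 8
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W_lowrank = (U[:, :r] * S[:r]) @ Vt[:r, :]      # stores 2*64*r values vs 64*64

print("pruned nonzeros:", np.count_nonzero(W_pruned))
print("max quantization error:", np.abs(W - W_dequant).max())
print("relative rank-%d error:" % r,
      np.linalg.norm(W - W_lowrank) / np.linalg.norm(W))
```

Each technique trades a small approximation error for fewer stored or computed parameters; the printed diagnostics make that trade-off concrete for the toy matrix.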