Parameters compressing in deep learning

Abstract

With the popularity of deep learning tools in image decomposition and natural language processing, how to store and process the large number of parameters required by deep learning algorithms has become an urgent problem. These parameter sets are enormous, often numbering in the millions. A feasible direction at present is to compress the parameter matrix with sparse representation techniques, reducing both the number of parameters and the storage pressure; such methods include matrix decomposition and tensor decomposition. To let vectors take advantage of the compression performance of matrix decomposition and tensor decomposition, we use reshaping and unfolding so that vectors serve as the input and output of Tensor-Factorized Neural Networks. We analyze how reshaping achieves the best compression ratio. From the relationship between the shape of the tensor and the number of parameters, we derive a lower bound on the number of parameters, and we verify this lower bound on several data sets.
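The compression idea summarized above can be sketched as a parameter-count comparison. The layer sizes, rank, reshaping into a 4-way tensor, and CP-style per-mode factor count below are illustrative assumptions for exposition, not the paper's exact construction:

```python
import numpy as np

# Illustrative sketch (assumed shapes and rank, not the paper's method):
# compare parameter counts for (a) a dense weight matrix, (b) a low-rank
# matrix factorization W ~= U @ V, and (c) a tensor-decomposition-style
# compression after reshaping the matrix into a higher-order tensor.

m, n = 1024, 1024              # dense layer: m * n parameters
full_params = m * n            # 1,048,576

# Matrix decomposition: keep U (m x r) and V (r x n) instead of W.
r = 16                         # assumed rank
matrix_params = r * (m + n)    # 32,768

# Tensor-style: reshape the 1024 x 1024 matrix into a 4-way tensor of
# shape (32, 32, 32, 32); a CP-like decomposition stores one rank-r
# factor matrix per mode, so the count is r * sum(mode sizes).
shape = (32, 32, 32, 32)
assert np.prod(shape) == m * n # reshaping preserves the element count
tensor_params = r * sum(shape) # 2,048

print(full_params, matrix_params, tensor_params)
# Splitting the same element count across more, smaller modes shrinks
# sum(shape), which is why the choice of reshaping drives the
# compression ratio and yields a lower bound on the parameter count.
```

Note how the reshaping step does not change the number of stored values by itself; it changes the shapes available to the decomposition, which is what the compression-ratio analysis optimizes over.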

Citation (APA):

He, S., Li, Z., Tang, Y., Liao, Z., Li, F., & Lim, S. J. (2020). Parameters compressing in deep learning. Computers, Materials and Continua, 62(1), 321–336. https://doi.org/10.32604/cmc.2020.06130
