Reducing Parameters of Neural Networks via Recursive Tensor Approximation


Abstract

Large-scale neural networks have attracted much attention for their surprising results in various cognitive tasks such as object detection and image classification. However, the large number of weight parameters in these complex networks can be problematic when the models are deployed to embedded systems. The problem is exacerbated in emerging neuromorphic computers, where each weight parameter is stored within a synapse, the primary computational resource of these bio-inspired machines. We describe an effective way of reducing the parameters through a recursive tensor factorization method. The tensor that represents the weight parameters is decomposed by applying the singular value decomposition (SVD) recursively; the resulting factors are then approximated by algorithms that jointly minimize the approximation error and the number of parameters. This process factorizes a given network, yielding a deeper, less dense, weight-shared network with good initial weights, which can be fine-tuned by gradient descent.
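
To make the idea concrete, below is a minimal sketch of recursive truncated-SVD factorization applied to a single dense weight matrix. The function names, the energy-based rank selection, and the parameter-count stopping rule are illustrative assumptions, not the paper's actual algorithm, which operates on tensors and includes error/parameter-minimizing approximation steps beyond plain SVD.

```python
# Illustrative sketch (not the paper's implementation): factor a dense
# weight matrix W into a chain of thinner matrices via recursive
# truncated SVD, yielding a deeper, less dense replacement whose
# product approximates W and serves as a good initialization.
import numpy as np

def truncated_svd(W, energy=0.95):
    """Split W (m x n) into A (m x r) and B (r x n) with W ~ A @ B,
    keeping the smallest rank r that retains `energy` of the
    squared singular values. The energy criterion is an assumption."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r] * s[:r], Vt[:r, :]

def recursive_factorize(W, energy=0.95, depth=2):
    """Recursively factor W, replacing one dense layer with several
    smaller ones. Stops when factoring no longer saves parameters."""
    if depth == 0:
        return [W]
    A, B = truncated_svd(W, energy)
    if A.size + B.size >= W.size:  # no savings: keep W as-is
        return [W]
    return recursive_factorize(A, energy, depth - 1) + [B]

# Usage: a 512 x 512 weight matrix with low effective rank.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 64)) @ rng.standard_normal((64, 512))
factors = recursive_factorize(W)
approx = factors[0]
for F in factors[1:]:
    approx = approx @ F
print("layers:", len(factors),
      "params:", sum(F.size for F in factors), "vs", W.size,
      "rel. error:", np.linalg.norm(W - approx) / np.linalg.norm(W))
```

In this toy setting, the factored chain replaces one dense layer with several narrower layers whose total parameter count is far smaller; the factors would then be fine-tuned end to end by gradient descent, as the abstract describes.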

Cite


APA

Kwon, K., & Chung, J. (2022). Reducing Parameters of Neural Networks via Recursive Tensor Approximation. Electronics, 11(2), 214. https://doi.org/10.3390/electronics11020214
