Spectral pruning: Compressing deep neural networks via spectral analysis and its generalization error

Abstract

Compression techniques for deep neural network models are becoming very important for the efficient execution of high-performance deep learning systems on edge-computing devices. The concept of model compression is also important for analyzing the generalization error of deep learning, known as the compression-based error bound. However, there is still a huge gap between practically effective compression methods and their rigorous grounding in statistical learning theory. To resolve this issue, we develop a new theoretical framework for model compression and, based on this framework, propose a new pruning method called spectral pruning. We define the "degrees of freedom" to quantify the intrinsic dimensionality of a model via the eigenvalue distribution of the covariance matrix across the internal nodes, and we show that the compression ability is essentially controlled by this quantity. Moreover, we present a sharp generalization error bound for the compressed model and characterize the bias-variance tradeoff induced by the compression procedure. We apply our method to several datasets to justify our theoretical analyses and show the superiority of the proposed method.
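To make the "degrees of freedom" idea concrete, below is a minimal NumPy sketch. It uses the standard ridge-style definition N(λ) = Σ_j μ_j / (μ_j + λ), where μ_j are the eigenvalues of a layer's empirical activation covariance, and rounds N(λ) to suggest a compressed layer width. This is an illustrative assumption, not the paper's exact algorithm; the function names `degrees_of_freedom` and `spectral_width` and the parameter `lam` are hypothetical.

```python
import numpy as np

def degrees_of_freedom(cov: np.ndarray, lam: float) -> float:
    """N(lam) = sum_j mu_j / (mu_j + lam), where mu_j are the
    eigenvalues of the empirical covariance matrix of a layer's
    activations. Assumed ridge-style definition; the paper's
    exact normalization may differ."""
    mu = np.linalg.eigvalsh(cov)      # eigenvalues (ascending)
    mu = np.clip(mu, 0.0, None)       # guard against tiny negative values
    return float(np.sum(mu / (mu + lam)))

def spectral_width(activations: np.ndarray, lam: float) -> int:
    """Suggest a compressed width for one layer as the rounded
    degrees of freedom of its activation covariance."""
    # activations: (n_samples, n_units) hidden outputs over a dataset
    cov = np.cov(activations, rowvar=False)
    return max(1, int(round(degrees_of_freedom(cov, lam))))

# Toy usage: a 256-unit layer whose activation spectrum decays fast
# is compressible to far fewer units.
rng = np.random.default_rng(0)
decay = 1.0 / (1.0 + np.arange(256)) ** 2        # spectrum ~ j^{-2}
acts = rng.normal(size=(10_000, 256)) * np.sqrt(decay)
print(spectral_width(acts, lam=1e-3))            # e.g. a few dozen units
```

The intuition matches the abstract: the faster the eigenvalues of the covariance matrix decay, the smaller the degrees of freedom, and the more aggressively the layer can be pruned at a given regularization level λ.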

Citation (APA)

Suzuki, T., Abe, H., Murata, T., Horiuchi, S., Ito, K., Wachi, T., … Nishimura, T. (2020). Spectral pruning: Compressing deep neural networks via spectral analysis and its generalization error. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 2839–2846). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/393
