Network approximation using tensor sketching


Abstract

Deep neural networks are powerful learning models that achieve state-of-the-art performance on many computer vision, speech, and language processing tasks. In this paper, we study a fundamental question that arises when designing deep network architectures: Given a target network architecture, can we design a “smaller” network architecture that “approximates” the operation of the target network? The question is, in part, motivated by the challenge of parameter reduction (compression) in modern deep neural networks, as the ever-increasing storage and memory requirements of these networks pose a problem in resource-constrained environments. In this work, we focus on deep convolutional neural network architectures, and propose a novel randomized tensor sketching technique that we utilize to develop a unified framework for approximating the operation of both the convolutional and fully connected layers. By applying the sketching technique along different tensor dimensions, we design changes to the convolutional and fully connected layers that substantially reduce the number of effective parameters in a network. We show that the resulting smaller network can be trained directly and has a classification accuracy that is comparable to the original network.
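To make the parameter-reduction idea concrete, here is a minimal sketch of one-sided randomized sketching applied to a fully connected layer: the weight matrix is parameterized as the product of a small trainable factor and a fixed random sign (sketch) matrix, so the number of trainable parameters drops from m·n to m·k. The abstract does not specify the paper's exact construction, so the `SketchedLinear` module, its `sketch_dim` parameter, and the ±1 sketch are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn as nn

class SketchedLinear(nn.Module):
    """Fully connected layer whose weight is parameterized through a fixed
    random sketch: W ~ U @ S, with S a (k x n) random sign matrix.

    Illustrative sketch only; the paper's actual tensor-sketching scheme
    (applied along different tensor dimensions, including conv layers)
    is not described in the abstract.
    """
    def __init__(self, in_features, out_features, sketch_dim):
        super().__init__()
        # Fixed, non-trainable random +/-1 sketch, scaled to preserve variance.
        S = torch.randint(0, 2, (sketch_dim, in_features)).float() * 2 - 1
        self.register_buffer("S", S / sketch_dim ** 0.5)
        # Trainable factor: out_features x sketch_dim instead of
        # out_features x in_features.
        self.U = nn.Parameter(torch.randn(out_features, sketch_dim) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Sketch the input down to k dimensions, then map to the outputs;
        # equivalent to x @ (U @ S).T + bias without materializing U @ S.
        return (x @ self.S.t()) @ self.U.t() + self.bias

# A 1024 -> 512 layer with sketch dimension 64: trainable weight parameters
# fall from 1024*512 = 524,288 to 512*64 = 32,768 (plus the bias).
layer = SketchedLinear(1024, 512, sketch_dim=64)
y = layer(torch.randn(8, 1024))
print(y.shape)  # torch.Size([8, 512])
```

Because the sketch matrix is fixed, the smaller network can be trained directly with standard backpropagation, which matches the abstract's claim that the compressed network is trained end to end rather than distilled from a pretrained model.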

Citation (APA)

Kasiviswanathan, S. P., Narodytska, N., & Jin, H. (2018). Network approximation using tensor sketching. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 2319–2325). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/321
