Sample compression, support vectors, and generalization in deep learning

Abstract

Even though Deep Neural Networks (DNNs) are widely celebrated for their practical performance, they possess many intriguing properties related to depth that are difficult to explain both theoretically and intuitively. Understanding how weights in deep networks coordinate across layers to form useful learners has proven challenging, in part because the repeated composition of nonlinearities is analytically intractable. This paper presents a reparameterization of DNNs as a linear function of a feature map that is locally independent of the weights. This feature map transforms depth dependencies into simple tensor products and maps each input to a discrete subset of the feature space. Then, using a max-margin assumption, the paper develops a sample compression representation of the neural network in terms of the discrete activation states of neurons induced by s “support vectors”. The paper shows that the number of support vectors s relates to learning guarantees for neural networks through sample compression bounds, yielding a sample complexity of O(ns/ε) for networks with n neurons. Finally, the number of support vectors s is found to increase monotonically with width and label noise but to decrease with depth.
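As a rough illustration of the reparameterization described in the abstract (a minimal sketch, not the paper's exact construction), the following Python snippet checks numerically that a two-layer ReLU network is linear in a feature map built from the input and the hidden layer's binary activation pattern, with coefficients given by a tensor (outer) product of the layer weights. All variable names and dimensions are illustrative.

import numpy as np

# Hypothetical two-layer ReLU network: f(x) = w2 . relu(W1 x)
rng = np.random.default_rng(0)
d_in, d_hidden = 5, 8
W1 = rng.normal(size=(d_hidden, d_in))
w2 = rng.normal(size=d_hidden)
x = rng.normal(size=d_in)

# Standard forward pass
f_x = w2 @ np.maximum(W1 @ x, 0.0)

# Reparameterized view: with a(x) the binary activation pattern of the
# hidden layer, the output is linear in the feature map
# Phi(x)[j, i] = a_j(x) * x_i, with coefficients given by the tensor
# (outer) product of the layer weights, w2[j] * W1[j, i].
a = (W1 @ x > 0).astype(float)       # discrete activation pattern
Phi = np.outer(a, x)                 # feature map: depends on x and a(x) only
coeffs = w2[:, None] * W1            # tensor product of weights across layers
f_x_reparam = np.sum(coeffs * Phi)

print(np.isclose(f_x, f_x_reparam))  # True

For deeper networks, the paper's construction composes such activation patterns across layers, which is where the tensor-product structure mentioned in the abstract comes from.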

Citation (APA)

Snyder, C., & Vishwanath, S. (2020). Sample compression, support vectors, and generalization in deep learning. IEEE Journal on Selected Areas in Information Theory, 1(1), 106–120. https://doi.org/10.1109/JSAIT.2020.2981864
