Effect of depth and width on local minima in deep learning

Abstract

In this paper, we analyze the effects of depth and width on the quality of local minima, without the strong overparameterization and simplification assumptions made in the literature. Without any simplification assumption, for deep nonlinear neural networks with the squared loss, we theoretically show that the quality of local minima tends to improve toward the global minimum value as depth and width increase. Furthermore, with a locally induced structure on deep nonlinear neural networks, the values of local minima of neural networks are theoretically proven to be no worse than the globally optimal values of corresponding classical machine learning models. We empirically support our theoretical observations with a synthetic data set as well as the MNIST, CIFAR-10, and SVHN data sets. Compared to previous studies with strong overparameterization assumptions, the results in this letter do not require overparameterization and instead show the gradual effects of overparameterization as consequences of general results.
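The empirical claim can be illustrated with a small experiment: train fully connected networks of varying depth and width with the squared loss on a synthetic regression task, and compare the training loss reached from a random initialization. The following is a minimal sketch only, not the authors' experimental setup; the synthetic data, architecture, optimizer, and hyperparameters are illustrative assumptions.

    # Minimal sketch (illustrative assumptions, not the paper's code):
    # compare the training loss reached by MLPs of different depth and width
    # trained with the squared loss on a small synthetic regression task.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X = torch.randn(256, 10)                     # synthetic inputs
    y = torch.sin(X.sum(dim=1, keepdim=True))    # synthetic nonlinear targets

    def train_mlp(depth, width, steps=2000, lr=1e-3):
        # Build an MLP with `depth` hidden layers of size `width`.
        layers, d_in = [], X.shape[1]
        for _ in range(depth):
            layers += [nn.Linear(d_in, width), nn.ReLU()]
            d_in = width
        layers.append(nn.Linear(d_in, 1))
        model = nn.Sequential(*layers)
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()                   # squared loss, as in the paper
        for _ in range(steps):
            opt.zero_grad()
            loss = loss_fn(model(X), y)
            loss.backward()
            opt.step()
        return loss.item()

    # The training loss reached at the end of optimization tends to decrease
    # as depth and width grow, consistent with the claim in the abstract.
    for depth in (1, 2, 4):
        for width in (8, 32, 128):
            print(f"depth={depth} width={width} final loss={train_mlp(depth, width):.4f}")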

Citation (APA)

Kawaguchi, K., Huang, J., & Kaelbling, L. P. (2019). Effect of depth and width on local minima in deep learning. Neural Computation. https://doi.org/10.1162/neco_a_01195
