Mathematical Models of Overparameterized Neural Networks

Abstract

Deep learning has achieved considerable empirical success in recent years. However, while practitioners have discovered many ad hoc tricks, until recently there has been little theoretical understanding of the techniques invented in the deep learning literature. Motivated by the practical observation that overparameterized neural networks (NNs) are easy to train, the past few years have seen important theoretical developments in the analysis of overparameterized NNs. In particular, it was shown that such systems behave like convex systems under various restricted settings, for example for two-layer NNs, and when learning is restricted locally to the so-called neural tangent kernel (NTK) space around specialized initializations. This article discusses some of this recent progress, which has led to a significantly better understanding of NNs. We focus on the analysis of two-layer NNs and explain the key mathematical models, together with their algorithmic implications. We then discuss challenges in understanding deep NNs and some current research directions.
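As a rough illustrative sketch (the notation below is generic and not taken from this page), the kind of two-layer overparameterized model analyzed in the NTK regime, together with its local linearization around the initialization, can be written as

f(x;\theta) \;=\; \frac{1}{\sqrt{m}} \sum_{j=1}^{m} a_j \,\sigma\!\left(w_j^{\top} x\right),
\qquad \theta = (a_1,\dots,a_m,\, w_1,\dots,w_m),

% first-order expansion around the random initialization \theta_0,
% which is linear in the parameters \theta:
f(x;\theta) \;\approx\; f(x;\theta_0) \;+\; \nabla_\theta f(x;\theta_0)^{\top} (\theta - \theta_0),

% the induced neural tangent kernel:
K(x, x') \;=\; \big\langle \nabla_\theta f(x;\theta_0),\; \nabla_\theta f(x';\theta_0) \big\rangle .

In this linearized regime, gradient descent on the NN behaves like kernel regression with K, which is one sense in which the training problem becomes effectively convex.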

Cite (APA)

Fang, C., Dong, H., & Zhang, T. (2021). Mathematical Models of Overparameterized Neural Networks. Proceedings of the IEEE, 109(5), 683–703. https://doi.org/10.1109/JPROC.2020.3048020
