On Nonconvex Optimization for Machine Learning


Abstract

Gradient descent (GD) and stochastic gradient descent (SGD) are the workhorses of large-scale machine learning. While classical theory focused on analyzing the performance of these methods in convex optimization problems, the most notable successes in machine learning have involved nonconvex optimization, and a gap has arisen between theory and practice. Indeed, traditional analyses of GD and SGD show that both algorithms converge to stationary points efficiently. But these analyses do not take into account the possibility of converging to saddle points. More recent theory has shown that GD and SGD can avoid saddle points, but the dependence on dimension in these analyses is polynomial. For modern machine learning, where the dimension can be in the millions, such dependence would be catastrophic. We analyze perturbed versions of GD and SGD and show that they are truly efficient: their dimension dependence is only polylogarithmic. Indeed, these algorithms converge to second-order stationary points in essentially the same time as they take to converge to classical first-order stationary points.

Citation (APA)

Jin, C., Netrapalli, P., Ge, R., Kakade, S. M., & Jordan, M. I. (2021). On Nonconvex Optimization for Machine Learning. Journal of the ACM, 68(2). https://doi.org/10.1145/3418526
