A Stochastic Gradient Method with Biased Estimation for Faster Nonconvex Optimization

Abstract

A number of optimization approaches have been proposed for nonconvex objectives (e.g. deep learning models), such as batch gradient descent, stochastic gradient descent (SGD) and stochastic variance reduced gradient (SVRG) descent. Theory shows that these methods converge when they use an unbiased gradient estimator. In practice, however, a biased gradient estimator can reach the vicinity of a solution more efficiently, because maintaining an unbiased estimate is computationally more expensive. Fast convergence therefore involves two trade-offs: between stochastic and batch gradients, and between biased and unbiased estimation. This paper proposes an integrated approach that controls the stochastic element of the optimizer and balances the biased/unbiased trade-off of the estimator through a hyper-parameter. It is shown theoretically and experimentally that this hyper-parameter can be configured to provide an effective balance and improve the convergence rate.
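As a rough illustration of the biased/unbiased trade-off described in the abstract, the sketch below mixes a fresh mini-batch gradient with a stale, periodically recomputed full-batch gradient through a mixing weight lam. This is a minimal sketch under assumed definitions, not the estimator proposed in the paper: the least-squares loss (convex, chosen only for brevity), the estimator form, and the names lam, snapshot_every are illustrative. With lam = 1.0 the update reduces to plain unbiased SGD; lam < 1.0 blends in the cheaper stale gradient, lowering variance at the cost of bias.

import numpy as np

# Minimal sketch: least-squares objective f(w) = 1/(2n) * ||Xw - y||^2.
rng = np.random.default_rng(0)
n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def full_grad(w):
    # Exact (batch) gradient over all n samples.
    return X.T @ (X @ w - y) / n

def minibatch_grad(w, idx):
    # Unbiased stochastic gradient on a sampled mini-batch.
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / len(idx)

def biased_sgd(lam=0.5, lr=0.05, epochs=20, batch=32, snapshot_every=5):
    # Blend a fresh mini-batch gradient with a stale full gradient.
    # lam = 1.0 recovers plain (unbiased) SGD; lam < 1.0 introduces bias
    # toward the snapshot gradient but reduces variance. Names here are
    # illustrative, not the paper's notation.
    w = np.zeros(d)
    stale = full_grad(w)              # snapshot of the full gradient
    for epoch in range(epochs):
        if epoch % snapshot_every == 0:
            stale = full_grad(w)      # occasionally refresh the snapshot
        for _ in range(n // batch):
            idx = rng.choice(n, size=batch, replace=False)
            g = lam * minibatch_grad(w, idx) + (1.0 - lam) * stale
            w -= lr * g
    return w

for lam in (1.0, 0.7, 0.3):
    w_hat = biased_sgd(lam=lam)
    loss = 0.5 * np.mean((X @ w_hat - y) ** 2)
    print(f"lam={lam:.1f}  final loss={loss:.4f}")

Varying lam in the loop above shows the balance the abstract refers to: intermediate values can progress faster per gradient evaluation than either extreme, while very small values stall near a biased point because the stale gradient dominates.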

Citation (APA)

Bi, J., & Gunn, S. R. (2019). A Stochastic Gradient Method with Biased Estimation for Faster Nonconvex Optimization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11671 LNAI, pp. 337–349). Springer Verlag. https://doi.org/10.1007/978-3-030-29911-8_26
