Conjugate directions for stochastic gradient descent

Abstract

The method of conjugate gradients provides a very effective way to optimize large, deterministic systems by gradient descent. In its standard form, however, it is not amenable to stochastic approximation of the gradient. Here we explore ideas from conjugate gradient in the stochastic (online) setting, using fast Hessian-gradient products to set up low-dimensional Krylov subspaces within individual mini-batches. In our benchmark experiments the resulting online learning algorithms converge orders of magnitude faster than ordinary stochastic gradient descent.
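To make the idea concrete, the following is a minimal sketch (not the authors' exact algorithm) of optimizing within a per-mini-batch Krylov subspace: a fast Hessian-vector product is used to span {g, Hg, H²g, ...} from the mini-batch gradient g, and the step is chosen by minimizing the local quadratic model restricted to that subspace. The parameters are assumed to be a flat array, and the names `loss_fn`, `krylov_step`, `krylov_dim`, and `damping` are illustrative assumptions, not from the paper.

```python
import jax
import jax.numpy as jnp


def hvp(loss_fn, params, batch, v):
    """Hessian-vector product via forward-over-reverse differentiation."""
    grad_fn = lambda p: jax.grad(loss_fn)(p, batch)
    return jax.jvp(grad_fn, (params,), (v,))[1]


def krylov_step(loss_fn, params, batch, krylov_dim=4, damping=1e-4):
    """One mini-batch step along directions from a small Krylov subspace (sketch)."""
    g = jax.grad(loss_fn)(params, batch)

    # Build an orthonormal basis of span{g, Hg, H^2 g, ...} by modified Gram-Schmidt,
    # generating each new direction with a Hessian-vector product.
    basis, v = [], g
    for _ in range(krylov_dim):
        for b in basis:
            v = v - jnp.vdot(b, v) * b
        norm = jnp.linalg.norm(v)
        if norm < 1e-10:
            break
        v = v / norm
        basis.append(v)
        v = hvp(loss_fn, params, batch, v)

    # Project the quadratic model onto the subspace and solve the small k x k system:
    # minimize g^T d + 0.5 d^T H d over d in span(basis).
    B = jnp.stack(basis)                                        # (k, n) basis rows
    HB = jnp.stack([hvp(loss_fn, params, batch, b) for b in basis])
    A = B @ HB.T + damping * jnp.eye(len(basis))                # reduced Hessian
    alpha = jnp.linalg.solve(A, B @ g)                          # step coefficients
    return params - B.T @ alpha


# Usage on a toy least-squares mini-batch (hypothetical example data).
def loss_fn(w, batch):
    X, y = batch
    return jnp.mean((X @ w - y) ** 2)


key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (32, 10))
batch = (X, X @ jnp.arange(10.0))
w = jnp.zeros(10)
for _ in range(20):
    w = krylov_step(loss_fn, w, batch)
```

In a stochastic setting each call would receive a fresh mini-batch, so the subspace and the quadratic model are rebuilt per batch; the damping term is a common safeguard when the mini-batch Hessian is indefinite or poorly conditioned.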

Citation

Schraudolph, N. N., & Graepel, T. (2002). Conjugate directions for stochastic gradient descent. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2415 LNCS, pp. 1351–1356). Springer Verlag. https://doi.org/10.1007/3-540-46084-5_218
