Deterministic convergence of an online gradient method with momentum


Abstract

An online gradient method with momentum for feedforward neural networks is considered. The learning rate is set to a constant, while the momentum coefficient is an adaptive variable. Both weak and strong convergence results are proved, along with convergence rates for the error function and for the weights. © Springer-Verlag Berlin Heidelberg 2006.
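The abstract does not spell out the update rule, but the general scheme it describes (online updates, constant learning rate, momentum coefficient adapted per step) can be sketched as follows. This is a minimal illustration on a least-squares linear model rather than the feedforward network the paper analyses, and the adaptive rule `mu_k = mu * min(1, ||grad||)` is a hypothetical stand-in for the paper's actual choice:

```python
import numpy as np

def online_gd_momentum(samples, w0, eta=0.1, mu=0.5, epochs=50):
    """Online gradient method with momentum (illustrative sketch).

    Per-sample loss: 0.5 * (w @ x - y)^2 -- a linear model used here
    only for illustration; the paper treats feedforward networks.
    The learning rate eta is constant; the momentum coefficient is
    adapted each step as mu_k = mu * min(1, ||grad||), a hypothetical
    rule standing in for the adaptive coefficient analysed in the paper.
    """
    w = np.asarray(w0, dtype=float)
    dw_prev = np.zeros_like(w)
    for _ in range(epochs):
        for x, y in samples:            # one sample at a time: online mode
            grad = (w @ x - y) * x      # gradient of 0.5 * (w @ x - y)^2
            mu_k = mu * min(1.0, np.linalg.norm(grad))  # adaptive momentum
            dw = -eta * grad + mu_k * dw_prev
            w = w + dw
            dw_prev = dw
    return w
```

Because `mu_k` shrinks with the gradient norm, the momentum term vanishes as the iterates approach a stationary point, which is the kind of damping that makes deterministic convergence proofs with a constant learning rate tractable.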

APA

Zhang, N. (2006). Deterministic convergence of an online gradient method with momentum. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4113 LNCS-I, pp. 94–105). Springer Verlag. https://doi.org/10.1007/11816157_10
