An online gradient method with momentum for feedforward neural networks is considered. The learning rate is set to a constant, while the momentum coefficient is an adaptive variable. Both weak and strong convergence results are proved, as well as convergence rates for the error function and for the weights. © Springer-Verlag Berlin Heidelberg 2006.
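To illustrate the kind of update rule the abstract describes, here is a minimal sketch of gradient descent with a momentum term whose coefficient adapts at each step. The adaptive rule shown (shrinking the momentum coefficient with the gradient norm) is a hypothetical stand-in, since the paper's specific rule is not given in this abstract; the toy objective and all names are assumptions for illustration.

```python
import numpy as np

def momentum_step(w, dw_prev, grad, eta, tau):
    """One update: dw_t = -eta * grad(w_t) + tau_t * dw_{t-1}; w_{t+1} = w_t + dw_t."""
    dw = -eta * grad + tau * dw_prev
    return w + dw, dw

# Toy objective E(w) = 0.5 * ||w||^2, so grad E(w) = w (not the paper's network error).
w = np.array([1.0, -2.0])
dw = np.zeros_like(w)
eta = 0.1  # constant learning rate, as in the abstract

for t in range(200):
    grad = w
    # Hypothetical adaptive momentum coefficient: it vanishes as the gradient
    # does, so the momentum term cannot dominate near a stationary point.
    tau = min(0.5, 0.5 * float(np.linalg.norm(grad)))
    w, dw = momentum_step(w, dw, grad, eta, tau)

print(w)  # approaches the minimizer [0, 0]
```

Tying the momentum coefficient to the current gradient is one common way such adaptive schemes keep the iteration stable with a fixed learning rate; the paper's convergence analysis presumably imposes its own conditions on the coefficient.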
CITATION STYLE
Zhang, N. (2006). Deterministic convergence of an online gradient method with momentum. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4113 LNCS-I, pp. 94–105). Springer Verlag. https://doi.org/10.1007/11816157_10