Lyapunov Stability Analysis of Gradient Descent-Learning Algorithm in Network Training

  • Banakar A

Abstract

The Lyapunov stability theorem is applied to guarantee the convergence and stability of the learning algorithm for several network architectures. The gradient descent learning algorithm and its variants are among the most widely used methods for training such networks. To guarantee stability and convergence of the learning process, an upper bound on the learning rate must be established. Here, the Lyapunov stability theorem is developed and applied to several networks in order to guarantee the stability of the learning algorithm.
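
The abstract does not reproduce the derivation, but the learning-rate upper bound it refers to typically follows from the standard discrete-time Lyapunov argument sketched below; the notation ($e(k)$ for the instantaneous output error, $\hat{y}$ for the network output, $w$ for the weights, $\eta$ for the learning rate) is assumed here for illustration rather than taken from the paper.

\[
V(k) = \tfrac{1}{2}\, e^{2}(k), \qquad
\Delta w(k) = -\eta\, \frac{\partial}{\partial w}\Big(\tfrac{1}{2}\, e^{2}(k)\Big)
            = \eta\, e(k)\, \frac{\partial \hat{y}(k)}{\partial w},
\]
\[
\Delta e(k) \approx \Big(\frac{\partial e(k)}{\partial w}\Big)^{\!\top} \Delta w(k)
            = -\eta\, e(k)\, \Big\| \frac{\partial \hat{y}(k)}{\partial w} \Big\|^{2},
\]
\[
\Delta V(k) = \Delta e(k)\Big[ e(k) + \tfrac{1}{2}\, \Delta e(k) \Big]
            = -\eta\, e^{2}(k)\, \Big\| \frac{\partial \hat{y}(k)}{\partial w} \Big\|^{2}
              \Big( 1 - \tfrac{\eta}{2}\, \Big\| \frac{\partial \hat{y}(k)}{\partial w} \Big\|^{2} \Big),
\]
so $\Delta V(k) < 0$, and the training error decreases, whenever the learning rate satisfies
\[
0 < \eta < \frac{2}{\max_{k}\, \big\| \partial \hat{y}(k) / \partial w \big\|^{2}}.
\]

Different architectures change only the output-gradient term $\partial \hat{y} / \partial w$ in this bound, which is presumably why each network type is analyzed separately in the paper.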

Citation (APA)

Banakar, A. (2011). Lyapunov Stability Analysis of Gradient Descent-Learning Algorithm in Network Training. ISRN Applied Mathematics, 2011, 1–12. https://doi.org/10.5402/2011/145801
