Abstract
The Lyapunov stability theorem is applied to guarantee the convergence and stability of the learning algorithm for several networks. Gradient descent and the algorithms derived from it are among the most widely used learning algorithms for training such networks. To guarantee the stability and convergence of the learning process, the upper bound of the learning rate must be investigated. Here, the Lyapunov stability theorem is developed and applied to several networks in order to guarantee the stability of the learning algorithm.
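As a minimal illustration of the idea the abstract describes (not the paper's own derivation), consider gradient descent on a quadratic loss J(w) = ½ wᵀAw. Taking V(w) = ‖w‖² as a Lyapunov candidate, the update w ← w − ηAw makes V decrease for every mode exactly when η < 2/λ_max(A), which gives an explicit upper bound on the learning rate. The matrix A and step count below are illustrative choices, not values from the paper:

```python
import numpy as np

# Sketch: for J(w) = 0.5 * w^T A w, gradient descent w <- w - eta * A w
# is stable iff eta < 2 / lambda_max(A). We pick eta just inside the bound.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + np.eye(4)            # symmetric positive definite Hessian
lam_max = np.linalg.eigvalsh(A)[-1]  # eigvalsh returns eigenvalues ascending
eta = 1.9 / lam_max                  # inside the Lyapunov-derived bound

w = rng.standard_normal(4)
for _ in range(500):
    w_next = w - eta * (A @ w)
    # Lyapunov condition: V(w_next) <= V(w) along the trajectory
    assert w_next @ w_next <= w @ w + 1e-12
    w = w_next

print(np.linalg.norm(w))  # converges toward the minimizer w* = 0
```

Choosing η just above 2/λ_max instead makes the largest eigenmode grow at each step, so V increases and the iteration diverges; this is the sense in which the learning-rate upper bound guarantees stability.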
Citation
Banakar, A. (2011). Lyapunov Stability Analysis of Gradient Descent-Learning Algorithm in Network Training. ISRN Applied Mathematics, 2011, 1–12. https://doi.org/10.5402/2011/145801