Second order learning algorithm for back propagation neural networks

Citations: 3 · Readers (Mendeley): 8

Abstract

Training of artificial neural networks (ANNs) is normally a time-consuming task because of the iterative search imposed by the implicit nonlinearity of the network behavior. In this work, an improvement to 'batch-mode' offline training methods, whether gradient-based or gradient-free, is proposed. The new procedure computes and improves the search direction along the negative gradient by introducing a 'gain' value for the activation functions and by calculating the negative gradient of the error with respect to the weights as well as the gain values when minimizing the error function. The main advantage of this new procedure is that it is easy to incorporate into faster optimization algorithms such as the conjugate gradient and quasi-Newton methods. The performance of the proposed method, implemented within the conjugate gradient and quasi-Newton methods, is demonstrated by comparing the simulation results with the neural network toolbox for the chosen benchmark. The simulation results clearly show that the proposed method significantly improves the convergence rate, and thereby speeds up the learning process, of the general back-propagation algorithm because of its new, more efficient search direction.
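To make the described idea concrete, below is a minimal NumPy sketch, not the authors' code, of steepest descent that updates both the weights and per-neuron activation 'gains' by computing the error gradient with respect to each. The layer sizes, learning rate, and XOR toy data are illustrative assumptions; the paper plugs this gain-augmented gradient into conjugate gradient and quasi-Newton optimizers rather than plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR toy problem (illustrative assumption; the paper uses its own benchmarks)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

n_in, n_hid, n_out = 2, 4, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hid))
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))
c1 = np.ones(n_hid)   # gains of hidden-layer activations
c2 = np.ones(n_out)   # gains of output-layer activations
lr = 0.5              # assumed learning rate

def sigmoid(net, c):
    """Logistic activation with adjustable gain c: f(net) = 1 / (1 + exp(-c * net))."""
    return 1.0 / (1.0 + np.exp(-c * net))

for epoch in range(5000):
    # Forward pass
    net1 = X @ W1
    h = sigmoid(net1, c1)
    net2 = h @ W2
    y = sigmoid(net2, c2)

    # Backward pass: gradients of E = 0.5 * sum((y - T)^2)
    # with respect to the weights AND the gains
    e = y - T
    d2 = e * y * (1 - y)                 # dE/d(c2 * net2)
    dW2 = h.T @ (d2 * c2)                # gradient w.r.t. output weights
    dc2 = np.sum(d2 * net2, axis=0)      # gradient w.r.t. output gains

    d1 = (d2 * c2) @ W2.T * h * (1 - h)  # dE/d(c1 * net1)
    dW1 = X.T @ (d1 * c1)                # gradient w.r.t. hidden weights
    dc1 = np.sum(d1 * net1, axis=0)      # gradient w.r.t. hidden gains

    # Steepest-descent update of weights and gains along the negative gradient
    W1 -= lr * dW1; W2 -= lr * dW2
    c1 -= lr * dc1; c2 -= lr * dc2

print("outputs after training:", y.ravel().round(3))
```

In this sketch the search direction is simply the negative of the combined (weight, gain) gradient; the abstract's point is that the same gradient can be handed to a conjugate gradient or quasi-Newton routine to obtain a faster search direction.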

Citation (APA)

Nawi, N. M., Hamid, N. A., Samsudin, N. A., Mohd Yunus, M. A., & Ab Aziz, M. F. (2017). Second order learning algorithm for back propagation neural networks. International Journal on Advanced Science, Engineering and Information Technology, 7(4), 1162–1171. https://doi.org/10.18517/ijaseit.7.4.1956
