Several learning algorithms have been developed for feed-forward neural networks (FFNs). Many of them are based on the gradient descent method well known in optimization theory, which performs poorly in practical applications. In this paper we modify the Polak–Ribière conjugate gradient method to train feed-forward neural networks. Our modification is based on the secant equation (quasi-Newton condition). The proposed algorithm is tested on some well-known test problems and compared with other algorithms in this field.
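To make the training scheme concrete, the following is a minimal sketch of nonlinear conjugate-gradient back-propagation using the standard Polak–Ribière beta with the usual non-negativity restart. The abstract does not spell out the secant-equation-based modification of beta, so that formula is not reproduced here; the line computing `beta` marks where it would plug in. The XOR test problem, network sizes, tanh activation, and the backtracking (Armijo) line search are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny XOR problem as a stand-in test function (assumption, not from the paper).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

# One hidden layer; sizes are illustrative choices.
n_in, n_hid, n_out = 2, 4, 1
sizes = [(n_hid, n_in + 1), (n_out, n_hid + 1)]  # +1 for bias columns

def unpack(w):
    """Split the flat parameter vector into weight matrices."""
    mats, i = [], 0
    for r, c in sizes:
        mats.append(w[i:i + r * c].reshape(r, c))
        i += r * c
    return mats

def loss_and_grad(w):
    """Sum-of-squares error and its gradient via back-propagation."""
    W1, W2 = unpack(w)
    A0 = np.hstack([X, np.ones((len(X), 1))])      # inputs plus bias column
    A1 = np.tanh(A0 @ W1.T)                        # hidden layer
    A1b = np.hstack([A1, np.ones((len(X), 1))])
    Y = A1b @ W2.T                                 # linear output layer
    E = Y - T
    loss = 0.5 * np.sum(E ** 2)
    gW2 = E.T @ A1b
    dA1 = (E @ W2)[:, :n_hid] * (1 - A1 ** 2)      # tanh derivative
    gW1 = dA1.T @ A0
    return loss, np.concatenate([gW1.ravel(), gW2.ravel()])

def line_search(w, d, g, f0, alpha=1.0, c1=1e-4):
    """Backtracking (Armijo) line search; the paper may use a different rule."""
    slope = g @ d
    while alpha > 1e-12:
        f_new, _ = loss_and_grad(w + alpha * d)
        if f_new <= f0 + c1 * alpha * slope:
            return alpha, f_new
        alpha *= 0.5
    return alpha, f0

n_params = sum(r * c for r, c in sizes)
w = rng.normal(scale=0.5, size=n_params)
f, g = loss_and_grad(w)
d = -g
for k in range(500):
    alpha, f = line_search(w, d, g, f)
    w_new = w + alpha * d
    _, g_new = loss_and_grad(w_new)
    # Standard Polak-Ribiere beta with non-negativity restart. The paper's
    # secant-equation (quasi-Newton) modification would replace this line.
    beta = max(0.0, (g_new @ (g_new - g)) / (g @ g))
    d = -g_new + beta * d
    w, g = w_new, g_new
    if np.linalg.norm(g) < 1e-6:
        break

print(f"iterations: {k + 1}, final loss: {f:.2e}")
```

The conjugate direction `d = -g_new + beta * d` is what distinguishes this scheme from plain gradient-descent back-propagation: each search direction mixes the new steepest-descent direction with the previous one, and the choice of beta (here Polak–Ribière, modified in the paper via the secant equation) governs that mixing.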
Citation: Al-Bayati, A., Saleh, I. A., & Abbo, K. K. (2011). Conjugate gradient back-propagation with modified Polak–Ribière updates for training feed forward neural network. Iraqi Journal of Statistical Sciences, 11(20), 164–173. https://doi.org/10.33899/iqjoss.2011.27897