Conjugate Gradient Back-propagation with Modified Polak-Ribière Updates for Training Feed-Forward Neural Networks

  • Al-Bayati, A.
  • Saleh, I. A.
  • Abbo, K. K.

Abstract

Several learning algorithms for feed-forward neural networks (FFNs) have been developed. Many of them are based on the gradient descent method, well known in optimization theory, which often performs poorly in practical applications. In this paper we modify the Polak-Ribière conjugate gradient method to train feed-forward neural networks. Our modification is based on the secant equation (the quasi-Newton condition). The suggested algorithm is tested on some well-known test problems and compared with other algorithms in this field.
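
The abstract does not reproduce the secant-based modification itself, so the sketch below shows only the standard Polak-Ribière(+) conjugate gradient back-propagation loop that such a method builds on: gradients come from ordinary back-propagation, the new search direction is d = -g_new + beta * d with beta = max(0, g_newᵀ(g_new - g) / (gᵀg)), and the step length comes from a simple Armijo backtracking line search. The network architecture, the XOR test problem, and every function name here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): a 2-3-1 logistic network on XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
sizes = [2, 3, 1]

def init_params():
    # Flat parameter vector: weights then biases for each layer.
    return np.concatenate([0.5 * rng.standard_normal(m * n + n)
                           for m, n in zip(sizes[:-1], sizes[1:])])

def unpack(w):
    layers, i = [], 0
    for m, n in zip(sizes[:-1], sizes[1:]):
        W = w[i:i + m * n].reshape(m, n); i += m * n
        b = w[i:i + n]; i += n
        layers.append((W, b))
    return layers

def loss_grad(w):
    # Forward pass (logistic units) and squared-error loss.
    layers = unpack(w)
    acts = [X]
    for W, b in layers:
        acts.append(1.0 / (1.0 + np.exp(-(acts[-1] @ W + b))))
    err = acts[-1] - T
    loss = 0.5 * np.sum(err ** 2)
    # Backward pass: ordinary back-propagation, flattened to one vector.
    grads, delta = [], err * acts[-1] * (1.0 - acts[-1])
    for (W, _), a in zip(reversed(layers), reversed(acts[:-1])):
        grads.append((a.T @ delta, delta.sum(axis=0)))
        delta = (delta @ W.T) * a * (1.0 - a)
    return loss, np.concatenate([np.concatenate([gW.ravel(), gb])
                                 for gW, gb in reversed(grads)])

def backtracking(w, d, f, g, alpha=1.0, c=1e-4):
    # Armijo backtracking line search (a stand-in for the paper's step rule).
    slope = g @ d
    while alpha > 1e-10 and loss_grad(w + alpha * d)[0] > f + c * alpha * slope:
        alpha *= 0.5
    return alpha

w = init_params()
f, g = loss_grad(w)
d = -g
for k in range(500):
    alpha = backtracking(w, d, f, g)
    w_new = w + alpha * d
    f_new, g_new = loss_grad(w_new)
    y = g_new - g                           # gradient difference y_k
    beta = max(0.0, (g_new @ y) / (g @ g))  # standard PR+ beta, not the modified one
    d = -g_new + beta * d
    if g_new @ d >= 0.0:                    # restart if not a descent direction
        d = -g_new
    w, f, g = w_new, f_new, g_new

print(f"final loss after CG back-propagation: {f:.6f}")
```

Replacing the beta line with a secant-informed formula, one that uses y = g_new - g together with the step s = alpha * d so the direction approximately satisfies the quasi-Newton condition B s = y, is where a modification of the kind the abstract describes would slot in.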

Citation (APA)

Al-Bayati, A., Saleh, I. A., & Abbo, K. K. (2011). Conjugate gradient back-propagation with modified Polak-Ribière updates for training feed-forward neural networks. Iraqi Journal of Statistical Sciences, 11(20), 164–173. https://doi.org/10.33899/iqjoss.2011.27897
