The generalized proportional-integral-derivative (PID) gradient descent back propagation algorithm

Abstract

The back-propagation learning rule is modified by augmenting the classical gradient descent algorithm, which uses only a proportional term of the gradient, with integral and derivative terms of the gradient. The effect of these terms on the convergence behaviour of the objective function is studied and compared with the momentum method (MOM). It is observed that, with appropriate tuning of the proportional-integral-derivative (PID) parameters, the rate of convergence is greatly improved and local minima can be overcome. The integral action also helps in locating a minimum quickly. A guideline is presented for tuning the PID parameters, and an "integral suppression scheme" is proposed that effectively uses the PID principles, resulting in faster convergence to a desired minimum.
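
To make the update rule concrete, here is a minimal sketch in Python of a PID-style gradient descent step on a toy quadratic objective. The gain names Kp, Ki and Kd, the objective, and the tuning values are illustrative assumptions; they are not the parameter settings, tuning guideline, or integral suppression scheme given in the paper.

import numpy as np

# Minimal sketch of a PID-style gradient descent update on a toy quadratic
# objective f(w) = 0.5 w'Aw - b'w. The gains Kp, Ki, Kd and their values are
# illustrative assumptions, not the tuning guideline from the paper.

A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])

def grad(w):
    return A @ w - b  # gradient of the quadratic objective

Kp, Ki, Kd = 0.1, 0.01, 0.05  # proportional, integral and derivative gains
w = np.zeros(2)
grad_sum = np.zeros(2)        # running sum of past gradients (integral term)
prev_g = np.zeros(2)          # previous gradient (derivative term)

for step in range(500):
    g = grad(w)
    grad_sum += g
    # P, I and D terms of the gradient combined into one weight update
    w -= Kp * g + Ki * grad_sum + Kd * (g - prev_g)
    prev_g = g

print("PID estimate:       ", w)
print("closed-form minimum:", np.linalg.solve(A, b))

Setting Ki = Kd = 0 in this sketch recovers the purely proportional update, i.e. classical gradient descent.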

Citation

Vitthal, R., Sunthar, P., & Durgaprasada Rao, C. (1995). The generalized proportional-integral-derivative (PID) gradient descent back propagation algorithm. Neural Networks, 8(4), 563–569. https://doi.org/10.1016/0893-6080(94)00100-Z
