The traditional gradient descent back-propagation neural network algorithm is widely used to solve practical problems. Despite its success, it suffers from slow convergence and can become trapped in local minima. Several modifications have been proposed to improve the convergence rate of the gradient descent back-propagation algorithm, such as careful selection of the initial weights and biases, the learning rate, momentum, network topology, the activation function, and the 'gain' value in the activation function. In one such variation, previous researchers demonstrated that in the feed-forward pass, the slope of the activation function is directly influenced by the 'gain' parameter. This research proposes an algorithm that improves the performance of the back-propagation algorithm by adaptively changing the momentum value while keeping the 'gain' parameter fixed for all nodes in the neural network. The performance of the proposed method, Gradient Descent with Adaptive Momentum (GDAM), is compared with that of Gradient Descent with Adaptive Gain (GDM-AG) and Gradient Descent with Simple Momentum (GDM). The learning rate is kept fixed and the sigmoid activation function is used throughout the experiments. The efficiency of the proposed method is demonstrated by simulations on three classification problems. The results show that GDAM outperforms the previous methods, reaching an accuracy ratio of 1.0 on the classification problems, and can serve as an alternative approach to standard BPNN training. © 2011 Springer-Verlag.
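The abstract does not state the authors' exact adaptive-momentum rule, so the sketch below is only illustrative of the general idea: gradient descent with a momentum coefficient that is raised when the epoch error falls and lowered when it rises. The update rule, the parameter values, and all names (`train_adaptive_momentum`, `mu_step`, etc.) are assumptions for illustration, not the paper's GDAM.

```python
import math
import random

def sigmoid(x):
    # Clamp the input to avoid math.exp overflow on extreme weights.
    x = max(-60.0, min(60.0, x))
    return 1.0 / (1.0 + math.exp(-x))

def train_adaptive_momentum(data, epochs=2000, lr=0.5,
                            mu=0.5, mu_step=0.05, mu_min=0.1, mu_max=0.8):
    """Train one sigmoid neuron by gradient descent with momentum.

    The momentum coefficient mu is adapted once per epoch: increased
    when the epoch's squared error drops, decreased when it rises.
    This is an illustrative rule, not the authors' published one.
    """
    random.seed(0)
    n = len(data[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    b = 0.0
    vw = [0.0] * n          # velocity terms for weights
    vb = 0.0                # velocity term for the bias
    prev_err = float('inf')
    for _ in range(epochs):
        err = 0.0
        for x, t in data:
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err += 0.5 * (y - t) ** 2
            delta = (y - t) * y * (1.0 - y)   # dE/dnet for sigmoid + MSE
            for i in range(n):
                vw[i] = mu * vw[i] - lr * delta * x[i]
                w[i] += vw[i]
            vb = mu * vb - lr * delta
            b += vb
        # Adaptive momentum: reward progress, damp oscillation.
        mu = min(mu + mu_step, mu_max) if err < prev_err else max(mu - mu_step, mu_min)
        prev_err = err
    return w, b

# Usage: learn the (linearly separable) logical AND function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_adaptive_momentum(data)
preds = [round(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b))
         for x, _ in data]
```

Holding the learning rate fixed while adapting only the momentum mirrors the experimental setup described in the abstract, where the gain and learning rate are kept constant across all nodes.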
Rehman, M. Z., & Nawi, N. M. (2011). The effect of adaptive momentum in improving the accuracy of gradient descent back propagation algorithm on classification problems. In Communications in Computer and Information Science (Vol. 179 CCIS, pp. 380–390). https://doi.org/10.1007/978-3-642-22170-5_33