Studying the effect of adaptive momentum in improving the accuracy of the gradient descent backpropagation algorithm on classification problems
The traditional gradient descent back-propagation neural network (BPNN) algorithm is widely used to solve many practical problems. Despite its success, it suffers from slow convergence and can become trapped in local minima. Several modifications have been suggested to improve the convergence rate of the gradient descent back-propagation algorithm, such as careful selection of the initial weights and biases, the learning rate, the momentum, the network topology, the activation function, and the gain value of the activation function. Previous researchers demonstrated that, in the feed-forward phase, the slope of the activation function is directly influenced by the gain parameter. This research proposes an algorithm that improves the performance of the back-propagation algorithm by adaptively changing the momentum value while keeping the gain parameter fixed for all nodes in the neural network. The performance of the proposed method, Gradient Descent with Adaptive Momentum (GDAM), is compared with Gradient Descent with Adaptive Gain (GDM-AG) and Gradient Descent with Simple Momentum (GDM). The learning rate is kept fixed, and the sigmoid activation function is used throughout the experiments. The efficiency of the proposed method is demonstrated by simulations on three classification problems. Results show that GDAM clearly outperforms the previous methods, achieving an accuracy ratio of 1.0 on the classification problems, and can be used as an alternative training approach for BPNN.
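The abstract does not spell out the exact momentum-adaptation rule, so the following is only an illustrative sketch of the general idea: back-propagation with a fixed learning rate and a fixed sigmoid gain, where the momentum coefficient is adjusted during training according to a hypothetical error-based heuristic (raise momentum while the error keeps falling, cut it back when the error rises). The network size, the XOR task, and the adaptation constants are all assumptions chosen for demonstration, not the paper's actual experimental setup.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    # Sigmoid activation; the gain parameter stays fixed for all nodes.
    return 1.0 / (1.0 + np.exp(-gain * x))

def train_adaptive_momentum(epochs=5000, lr=0.5, mom=0.5, seed=0):
    """Train a tiny 2-4-1 network on XOR with an adaptive momentum term."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)
    # Velocity terms carry the previous weight change (the momentum memory).
    vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
    vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
    prev_err, errors = np.inf, []
    for _ in range(epochs):
        # Forward pass through the two sigmoid layers.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        err = float(np.mean((y - out) ** 2))
        errors.append(err)
        # Hypothetical adaptation rule (an assumption, not the paper's):
        # grow momentum while the error decreases, shrink it otherwise.
        mom = min(mom * 1.05, 0.9) if err < prev_err else max(mom * 0.5, 0.1)
        prev_err = err
        # Backpropagate the mean-squared error.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        gW2 = h.T @ d_out / len(X); gb2 = d_out.mean(axis=0)
        gW1 = X.T @ d_h / len(X);  gb1 = d_h.mean(axis=0)
        # Momentum update: velocity blends the previous step with the gradient.
        vW2 = mom * vW2 - lr * gW2; W2 += vW2
        vb2 = mom * vb2 - lr * gb2; b2 += vb2
        vW1 = mom * vW1 - lr * gW1; W1 += vW1
        vb1 = mom * vb1 - lr * gb1; b1 += vb1
    return errors

errors = train_adaptive_momentum()
print(f"initial MSE {errors[0]:.4f} -> final MSE {errors[-1]:.4f}")
```

The design choice mirrors the abstract's setup: the learning rate and the sigmoid gain are held constant, and only the momentum coefficient changes between epochs, so any improvement in convergence is attributable to the momentum adaptation alone.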