Effective Neural Network Training with a New Weighting Mechanism-Based Optimization Algorithm

Abstract

First-order gradient-based optimization algorithms are of core practical importance in deep learning. In this paper, we propose NWM-Adam, a new weighting mechanism-based first-order gradient descent optimization algorithm, to resolve the undesirable convergence behavior of optimization algorithms that scale gradient updates using a fixed-size window of past gradients, and to improve on the performance of Adam and AMSGrad. NWM-Adam is built on the idea of placing more weight on past gradients than on recent gradients, and it allows easy adjustment of the degree to which past gradients influence the estimates. To empirically evaluate NWM-Adam, we compare it with other popular optimization algorithms on three well-known machine learning models: logistic regression, multi-layer fully connected neural networks, and deep convolutional neural networks. The experimental results show that NWM-Adam outperforms the other optimization algorithms.
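The abstract describes NWM-Adam only at the level of its weighting idea, so the sketch below is purely illustrative and is not the paper's actual update rule. It replaces Adam's exponential moving averages (which favor recent gradients) with a weighted average in which older gradients receive larger weights, controlled by a hypothetical exponent p; the function name past_weighted_adam_step and the (t - k + 1)^p weighting are assumptions introduced here for illustration only.

```python
# Illustrative sketch only: the exact NWM-Adam update is not given in the
# abstract. Here, the gradient observed at step k receives weight
# (t - k + 1) ** p in the moment estimates, so older gradients dominate,
# and p (hypothetical) controls how strongly the past is emphasized.
import numpy as np

def past_weighted_adam_step(params, grad_history, t, lr=0.01, p=1.0, eps=1e-8):
    """One update step using a past-emphasizing weighted average of gradients.

    grad_history: list of all gradients seen so far (index 0 = oldest).
    p: weighting exponent; p = 0 gives a plain average, larger p puts more
       weight on older gradients.
    """
    # Weight for the gradient from step k (1-indexed): (t - k + 1) ** p,
    # so the oldest gradient gets the largest weight.
    weights = np.array([(t - k + 1) ** p for k in range(1, t + 1)])
    weights = weights / weights.sum()

    grads = np.stack(grad_history)                    # shape: (t, dim)
    m = (weights[:, None] * grads).sum(axis=0)        # weighted first moment
    v = (weights[:, None] * grads ** 2).sum(axis=0)   # weighted second moment
    return params - lr * m / (np.sqrt(v) + eps)

# Toy usage: minimize f(x) = ||x||^2, whose gradient is 2x.
x = np.array([1.0, -2.0])
history = []
for t in range(1, 201):
    history.append(2 * x)
    x = past_weighted_adam_step(x, history, t)
print(x)  # moves toward the origin
```

A practical implementation would maintain the weighted moments incrementally rather than storing the full gradient history; the explicit history is kept here only to make the weighting scheme easy to read.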

Citation

APA: Yu, Y., & Liu, F. (2019). Effective Neural Network Training with a New Weighting Mechanism-Based Optimization Algorithm. IEEE Access, 7, 72403–72410. https://doi.org/10.1109/ACCESS.2019.2919987
