An Adaptive Optimization Algorithm Based on Hybrid Power and Multidimensional Update Strategy

Abstract

Recently, adaptive learning rate optimization algorithms have shown excellent performance in the field of deep learning. However, the exponential moving average method can lead to convergence problems in some cases, e.g., converging only to a suboptimal minimum. Although the AMSGrad algorithm addresses these convergence problems, its actual performance is close to, or even weaker than, that of Adam. In this paper, a new update rule is proposed that mixes high powers of the historical and current squared gradients, yielding a targeted first-order optimization algorithm with an adaptive learning rate. This algorithm not only overcomes the convergence problems encountered by most current optimization algorithms but also converges quickly. It outperforms state-of-the-art algorithms on various real-world datasets; for example, in time series prediction tasks its forecast root-mean-square error is about 20% lower on average than that of the Adam and AMSGrad algorithms.
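The abstract does not give the exact update rule, so the following is only a minimal NumPy sketch of one plausible reading of the "hybrid power" idea: an Adam-style optimizer whose second-moment estimate mixes the p-th powers of the historical accumulator and the current squared gradient before taking the p-th root. The function name, the power parameter p, and the bias-correction step are illustrative assumptions, not the authors' formula; consult the paper for the actual algorithm.

```python
import numpy as np

def hybrid_power_step(param, grad, m, v, t,
                      lr=1e-3, beta1=0.9, beta2=0.999, p=2.0, eps=1e-8):
    """One optimizer step (hypothetical sketch, not the paper's exact rule)."""
    # First moment: standard exponential moving average of the gradient.
    m = beta1 * m + (1 - beta1) * grad
    # Second moment: mix the p-th powers of the historical estimate and the
    # current squared gradient, then take the p-th root. For p > 1 this
    # weights large recent gradients more heavily than a plain EMA would.
    v = (beta2 * v**p + (1 - beta2) * (grad**2)**p) ** (1.0 / p)
    # Adam-style bias correction (a simplification for this sketch).
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy usage: minimize f(x) = x^2 starting from x = 5.
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = hybrid_power_step(x, 2.0 * x, m, v, t, lr=0.05)
print(x)  # approaches 0
```

Note that with p = 1 the second-moment update reduces exactly to Adam's exponential moving average of squared gradients, so the power parameter can be viewed as interpolating away from the plain EMA that the abstract identifies as the source of the convergence problems.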

Citation (APA)

Hu, J., & Zheng, W. (2019). An Adaptive Optimization Algorithm Based on Hybrid Power and Multidimensional Update Strategy. IEEE Access, 7, 19355–19369. https://doi.org/10.1109/ACCESS.2019.2897639
