Adaptive learning rate via covariance matrix based preconditioning for deep neural networks

14 Citations · 39 Readers (Mendeley)

Abstract

Adaptive learning rate algorithms such as RMSProp are widely used for training deep neural networks. RMSProp offers efficient training since it uses first-order gradients to approximate Hessian-based preconditioning. However, since the first-order gradients include noise caused by stochastic optimization, the approximation may be inaccurate. In this paper, we propose a novel adaptive learning rate algorithm called SDProp. Its key idea is effective handling of the noise by preconditioning based on the covariance matrix. For various neural networks, our approach is more efficient and effective than RMSProp and its variant.
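To make the distinction concrete, the sketch below contrasts a standard RMSProp step (preconditioning by an exponential moving average of uncentered squared gradients) with an illustrative covariance-style variant that tracks a running gradient mean and preconditions by the centered second moment, i.e. a diagonal estimate of gradient noise. This is a hedch of the general idea only; the variable names, decay constants, and the exact update rule are assumptions for illustration, not the paper's SDProp algorithm.

```python
import numpy as np

def rmsprop_step(theta, grad, v, lr=0.01, gamma=0.9, eps=1e-8):
    """One RMSProp step: precondition by an EMA of squared gradients."""
    v = gamma * v + (1 - gamma) * grad ** 2      # uncentered second moment
    theta = theta - lr * grad / (np.sqrt(v) + eps)
    return theta, v

def centered_step(theta, grad, m, c, lr=0.001, gamma=0.99, eps=1e-8):
    """Illustrative covariance-based step (hypothetical, not the paper's
    exact SDProp update): maintain an EMA of the gradient mean `m` and a
    centered second moment `c` (a diagonal covariance estimate), so the
    preconditioner reflects gradient *noise* rather than gradient scale."""
    c = gamma * c + gamma * (1 - gamma) * (grad - m) ** 2
    m = gamma * m + (1 - gamma) * grad
    theta = theta - lr * grad / (np.sqrt(c) + eps)
    return theta, m, c

if __name__ == "__main__":
    # Toy demo: minimize f(x) = 0.5 * x^2 from noisy gradients g = x + noise.
    rng = np.random.default_rng(0)
    x_rms, v = 1.0, 0.0
    x_cov, m, c = 1.0, 0.0, 0.0
    for _ in range(2000):
        g_rms = x_rms + 0.1 * rng.standard_normal()
        g_cov = x_cov + 0.1 * rng.standard_normal()
        x_rms, v = rmsprop_step(x_rms, g_rms, v)
        x_cov, m, c = centered_step(x_cov, g_cov, m, c)
    print(x_rms, x_cov)
```

The uncentered moment in RMSProp conflates gradient magnitude with gradient noise, while the centered estimate isolates the noise term — which is the intuition behind covariance-based preconditioning described in the abstract.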

Citation (APA)
Ida, Y., Fujiwara, Y., & Iwamura, S. (2017). Adaptive learning rate via covariance matrix based preconditioning for deep neural networks. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 0, pp. 1923–1929). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/267
