Adaptive natural gradient learning algorithms for unnormalized statistical models

Abstract

The natural gradient is a powerful method for improving the transient dynamics of learning by exploiting the geometric structure of the parameter space. Many natural gradient methods have been developed for maximum likelihood learning, which is based on the Kullback-Leibler (KL) divergence and its Fisher metric. However, these methods require computing the normalization constant and therefore do not apply to statistical models whose normalization constant is analytically intractable. In this study, we extend the natural gradient framework to divergences for unnormalized statistical models: score matching and ratio matching. In addition, we derive novel adaptive natural gradient algorithms that avoid the computationally demanding inversion of the metric, and we demonstrate their effectiveness in numerical experiments. In particular, experiments on a multi-layer neural network model show that the proposed method escapes from plateaus much faster than conventional stochastic gradient descent.
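As a rough illustration only, not the paper's algorithm: the Python sketch below combines a score matching gradient for a toy one-parameter Gaussian, which never evaluates the normalization constant, with a classical rank-1 recursion for the inverse metric of the kind used in earlier adaptive natural gradient work, so that no explicit matrix inversion is needed. The model, metric estimate, and hyperparameters are all assumptions made for the example.

```python
import numpy as np

# Hypothetical toy model: a zero-mean Gaussian with unnormalized density
# p~(x; theta) = exp(-theta * x**2 / 2), treating its normalization
# constant as unknown. Hyvärinen's score matching objective uses only
# psi(x) = d/dx log p~(x; theta) = -theta * x, so per sample
#   J(theta; x) = psi(x)**2 / 2 + d(psi)/dx = theta**2 * x**2 / 2 - theta,
# and dJ/dtheta = theta * x**2 - 1.
def sm_gradient(theta, x):
    return theta * x**2 - 1.0

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=2000)  # samples from the true model, theta = 1

theta = 0.2   # initial parameter (scalar, so the metric is a scalar too)
g_inv = 1.0   # running estimate of the inverse metric
eta, eps = 0.05, 0.005  # learning rate and metric adaptation rate

for x in data:
    g = sm_gradient(theta, x)
    # Rank-1 recursion for the inverse metric -- no matrix inversion.
    # In d dimensions this reads
    #   G_inv = (1 + eps) * G_inv - eps * G_inv @ np.outer(g, g) @ G_inv
    g_inv = (1.0 + eps) * g_inv - eps * (g_inv * g) ** 2
    theta -= eta * g_inv * g  # adaptive natural-gradient step

print(f"estimated theta = {theta:.2f}  (true value: 1.0)")
```

In d dimensions the rank-1 recursion keeps the per-step cost at O(d^2), versus the O(d^3) of inverting the metric directly, which is the practical point of the adaptive variants described in the abstract.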

Citation (APA)

Karakida, R., Okada, M., & Amari, S. I. (2016). Adaptive natural gradient learning algorithms for unnormalized statistical models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9886 LNCS, pp. 427–434). Springer Verlag. https://doi.org/10.1007/978-3-319-44778-0_50
