Gradients should stay on path: better estimators of the reverse- and forward KL divergence for normalizing flows

Abstract

We show how to use the path-wise derivative estimator for both the forward and reverse Kullback-Leibler divergences for any practically invertible normalizing flow. The resulting path-gradient estimators are straightforward to implement, have lower variance, and lead not only to faster convergence of training but also to better overall approximation results compared to standard total gradient estimators. We also demonstrate that path-gradient training is less susceptible to mode collapse. In light of our results, we expect that path-gradient estimators will become the new standard method to train normalizing flows for variational inference.
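For intuition, here is a minimal PyTorch sketch (not the authors' implementation) of the path-gradient idea for the reverse KL: evaluating log q with the flow parameters detached removes the score term, so only the lower-variance pathwise derivative through the sample remains. The toy `AffineFlow` and target `log_p` are illustrative assumptions, not from the paper.

```python
import math
import torch

class AffineFlow(torch.nn.Module):
    """Toy invertible flow x = mu + exp(log_sigma) * z with standard-normal base."""
    def __init__(self, dim):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(dim))
        self.log_sigma = torch.nn.Parameter(torch.zeros(dim))

    def sample(self, z):
        # Reparametrized sample: gradients reach the parameters through this path.
        return self.mu + torch.exp(self.log_sigma) * z

    def log_q(self, x, detach_params=False):
        # Flow density via change of variables; optionally detach the parameters
        # so that backprop sees only the dependence on x (the "path").
        mu, ls = self.mu, self.log_sigma
        if detach_params:
            mu, ls = mu.detach(), ls.detach()
        z = (x - mu) * torch.exp(-ls)
        log_base = -0.5 * (z ** 2).sum(-1) - 0.5 * x.shape[-1] * math.log(2 * math.pi)
        return log_base - ls.sum()

def log_p(x):
    # Illustrative unnormalized target: a unit Gaussian centered at 1.
    return -0.5 * ((x - 1.0) ** 2).sum(-1)

flow = AffineFlow(dim=2)
opt = torch.optim.Adam(flow.parameters(), lr=1e-2)
for _ in range(200):
    z = torch.randn(512, 2)
    x = flow.sample(z)
    # Path-gradient reverse-KL loss: log q is evaluated with detached parameters,
    # dropping the zero-mean but high-variance score term of the total gradient.
    loss = (flow.log_q(x, detach_params=True) - log_p(x)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Detaching does not change the loss value, only its gradient; since the dropped score term has zero expectation, the estimator remains unbiased while its variance is reduced.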

Cite

APA: Vaitl, L., Nicoli, K. A., Nakajima, S., & Kessel, P. (2022). Gradients should stay on path: better estimators of the reverse- and forward KL divergence for normalizing flows. Machine Learning: Science and Technology, 3(4). https://doi.org/10.1088/2632-2153/ac9455
