FlowPrior: Learning Expressive Priors for Latent Variable Sentence Models

Abstract

Variational autoencoders (VAEs) are widely used for latent variable modeling of text. We focus on variations that learn expressive prior distributions over the latent variable. We find that existing training strategies are ineffective for learning rich priors, so we add the importance-sampled log marginal likelihood as a second term to the standard VAE objective to help when learning the prior. Doing so improves results for all priors evaluated, including a novel choice for sentence VAEs based on normalizing flows (NF). Priors parameterized with NF are no longer constrained to a specific distribution family, allowing them to encode the data distribution more flexibly. Our model, which we call FlowPrior, shows a substantial improvement in language modeling tasks compared to strong baselines. We demonstrate that FlowPrior learns an expressive prior through analysis and several forms of generation-based evaluation.
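The abstract describes augmenting the standard VAE objective with an importance-sampled estimate of the log marginal likelihood. A plausible form of the combined objective, assuming $K$ importance samples drawn from the inference network $q_\phi(z \mid x)$ and equal weighting of the two terms (the paper's exact weighting is not stated here), is:

```latex
\mathcal{L}(x)
  = \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
    - \mathrm{KL}\!\left(q_\phi(z \mid x)\,\|\,p_\theta(z)\right)}_{\text{standard ELBO}}
  + \underbrace{\log \frac{1}{K} \sum_{k=1}^{K}
      \frac{p_\theta(x, z^{(k)})}{q_\phi(z^{(k)} \mid x)}}_{\text{importance-sampled } \log p_\theta(x)},
  \qquad z^{(k)} \sim q_\phi(\cdot \mid x).
```

The second term is the importance-weighted (IWAE-style) bound on $\log p_\theta(x)$; because the prior $p_\theta(z)$ appears inside the joint $p_\theta(x, z^{(k)})$, gradients from this term flow directly into the prior's parameters.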
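A normalizing-flow prior applies an invertible transformation to a simple base distribution and scores samples with the change-of-variables formula. As a minimal illustrative sketch (not the paper's architecture — a single affine flow in one dimension, with names like `affine_flow` chosen here for illustration), the mechanics look like this:

```python
import math
import random

def std_normal_logpdf(x):
    """Log density of the standard normal base distribution N(0, 1)."""
    return -0.5 * (x * x + math.log(2.0 * math.pi))

def affine_flow(eps, scale, shift):
    """One affine flow step z = scale * eps + shift.

    Returns the transformed sample and log |det Jacobian|,
    which for a 1-D affine map is log |scale|.
    """
    z = scale * eps + shift
    log_det = math.log(abs(scale))
    return z, log_det

def flow_prior_logpdf(z, scale, shift):
    """Density of z under the flow prior via change of variables:
    log p(z) = log p_base(f^{-1}(z)) - log |det J|.
    """
    eps = (z - shift) / scale
    return std_normal_logpdf(eps) - math.log(abs(scale))

# Sample from the flow prior, then score the sample under it.
random.seed(0)
eps = random.gauss(0.0, 1.0)
z, log_det = affine_flow(eps, scale=2.0, shift=1.0)
log_p_z = flow_prior_logpdf(z, scale=2.0, shift=1.0)
```

In a real sentence VAE the flow would be a deep, multi-dimensional stack of such invertible layers, but the same two quantities (a transformed sample and a log-determinant correction) are all the training objective above needs from the prior.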

Citation (APA)

Ding, X., & Gimpel, K. (2021). FlowPrior: Learning Expressive Priors for Latent Variable Sentence Models. In NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 3242–3258). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.naacl-main.259
