Better exploiting latent variables in text modeling

Citations: 6 · Readers: 103 (Mendeley)

Abstract

We show that sampling latent variables multiple times at a gradient step helps improve a variational autoencoder, and we propose a simple and effective method to better exploit these latent variables through hidden state averaging. Consistent performance gains on two different datasets, Penn Treebank and Yahoo, indicate the generalizability of our method.
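The two ideas in the abstract can be sketched in plain Python. This is a hypothetical, framework-free illustration, not the paper's implementation: the actual model uses a neural encoder/decoder, whereas here latent samples and hidden states are simple lists of floats, and the function names are invented for clarity.

```python
import random

def sample_latents(mu, sigma, k=5):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, 1).
    # Drawing k samples per gradient step (rather than one) lowers the
    # variance of the Monte Carlo estimate of the ELBO's expectation term.
    return [[m + s * random.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]
            for _ in range(k)]

def average_hidden_states(hidden_states):
    # Hidden state averaging: combine the decoder hidden states obtained
    # from the k latent samples into one averaged representation.
    k = len(hidden_states)
    dim = len(hidden_states[0])
    return [sum(h[d] for h in hidden_states) / k for d in range(dim)]

# Example: 4 latent samples from a 2-dimensional approximate posterior,
# then averaging the (here, stand-in) per-sample hidden states.
zs = sample_latents(mu=[0.0, 0.0], sigma=[1.0, 1.0], k=4)
avg = average_hidden_states(zs)
```

In the paper's setting, each latent sample would be decoded to its own hidden state before averaging; the averaging step is what lets a single downstream computation exploit all k samples.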


APA

Kruengkrai, C. (2020). Better exploiting latent variables in text modeling. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (pp. 5527–5532). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p19-1553
