We show that sampling latent variables multiple times at each gradient step improves a variational autoencoder, and we propose a simple and effective method to better exploit these latent variables through hidden-state averaging. Consistent performance gains on two different datasets, Penn Treebank and Yahoo, indicate the generalizability of our method.
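The abstract describes two ideas: drawing multiple latent samples per gradient step and averaging the resulting decoder hidden states. The following is a minimal NumPy sketch of that scheme, not the paper's actual implementation; the Gaussian posterior, the linear-plus-tanh projection, and all parameter names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latents(mu, logvar, k, rng):
    """Draw k samples of z from an assumed Gaussian posterior
    q(z|x) = N(mu, diag(exp(logvar))) via the reparameterization trick."""
    std = np.exp(0.5 * logvar)
    eps = rng.standard_normal((k,) + mu.shape)
    return mu + std * eps  # shape: (k, latent_dim)

def averaged_hidden_state(z_samples, W, b):
    """Map each latent sample to a decoder hidden state with a
    hypothetical linear+tanh projection, then average over the k samples."""
    h = np.tanh(z_samples @ W + b)  # (k, hidden_dim)
    return h.mean(axis=0)           # (hidden_dim,)

# Toy dimensions for illustration only.
mu = np.zeros(4)
logvar = np.zeros(4)
W = rng.standard_normal((4, 8))
b = np.zeros(8)

z = sample_latents(mu, logvar, k=5, rng=rng)
h_avg = averaged_hidden_state(z, W, b)
print(h_avg.shape)  # (8,)
```

In a full model, `h_avg` would initialize or condition the decoder, so the decoder sees an estimate averaged over several posterior samples rather than a single draw.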
CITATION STYLE
Kruengkrai, C. (2020). Better exploiting latent variables in text modeling. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (pp. 5527–5532). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p19-1553