On the encoder-decoder incompatibility in variational text modeling and beyond

Citations: 1
Readers (Mendeley): 111

Abstract

Variational autoencoders (VAEs) combine latent variables with amortized variational inference, whose optimization often converges to a trivial local optimum known as posterior collapse, especially in text modeling. By tracking the optimization dynamics, we observe an encoder-decoder incompatibility that leads to poor parameterizations of the data manifold. We argue that the trivial local optimum can be avoided by improving the encoder and decoder parameterizations, since the posterior network is part of a transition map between them. To this end, we propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder of the same structure and improves the encoder and decoder parameterizations via encoder weight sharing and decoder signal matching. We apply the proposed Coupled-VAE approach to various VAE models with different regularizations, posterior families, decoder structures, and optimization strategies. Experiments on benchmark datasets (i.e., PTB, Yelp, and Yahoo) show consistent improvements in probability estimation and in the richness of the latent space. We also generalize our method to conditional language modeling and propose Coupled-CVAE, which greatly improves the diversity of dialogue generation on the Switchboard dataset.
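The coupling idea from the abstract can be sketched numerically. The toy forward pass below uses linear encoders/decoders; all shapes, weight names, and the unweighted sum of losses are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, W, b):
    return x @ W + b

# Toy dimensions and a small mini-batch (illustrative only).
d_in, d_z = 8, 2
x = rng.normal(size=(4, d_in))

# Encoder weights are SHARED between the VAE path and the
# deterministic autoencoder path (encoder weight sharing).
W_mu = rng.normal(size=(d_in, d_z)) * 0.1
b_mu = np.zeros(d_z)
W_lv = rng.normal(size=(d_in, d_z)) * 0.1
b_lv = np.zeros(d_z)

# Two decoders with identical structure: one for the stochastic
# (VAE) path, one for the deterministic path.
W_dec_vae = rng.normal(size=(d_z, d_in)) * 0.1
b_dec_vae = np.zeros(d_in)
W_dec_det = rng.normal(size=(d_z, d_in)) * 0.1
b_dec_det = np.zeros(d_in)

# --- Stochastic path: reparameterized latent sample ---
mu = linear(x, W_mu, b_mu)
logvar = linear(x, W_lv, b_lv)
z_vae = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
h_vae = linear(z_vae, W_dec_vae, b_dec_vae)   # decoder output signal

# --- Deterministic path: same encoder, no sampling ---
z_det = mu
h_det = linear(z_det, W_dec_det, b_dec_det)

# Loss terms: reconstructions, Gaussian KL to the prior, and the
# matching penalty that ties the two decoders' signals together.
recon_vae = np.mean((h_vae - x) ** 2)
recon_det = np.mean((h_det - x) ** 2)
kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))
signal_match = np.mean((h_vae - h_det) ** 2)  # decoder signal matching

total = recon_vae + recon_det + kl + signal_match
print(float(total))
```

The deterministic path gives the encoder and decoder a collapse-free training signal, while the signal-matching term keeps the VAE decoder's hidden outputs close to those of its deterministic counterpart.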

Cite (APA)

Wu, C., Wang, P. Z., & Wang, W. Y. (2020). On the encoder-decoder incompatibility in variational text modeling and beyond. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 3449–3464). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.316
