Deconvolutional latent-variable model for text sequence matching

Citations: 38 · Mendeley readers: 109

Abstract

A latent-variable model is introduced for text matching, inferring sentence representations by jointly optimizing generative and discriminative objectives. To alleviate typical optimization challenges in latent-variable models for text, we employ deconvolutional networks as the sequence decoder (generator), providing the learned latent codes with more semantic information and better generalization. Trained in an unsupervised manner, our model yields stronger empirical predictive performance than an equivalent model with a Long Short-Term Memory (LSTM) decoder, with fewer parameters and considerably faster training. Further, we apply it to text sequence-matching problems, where the proposed model significantly outperforms several strong sentence-encoding baselines, especially in the semi-supervised setting.
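The sketch below illustrates the architecture the abstract describes, assuming PyTorch: a convolutional encoder infers a Gaussian latent code for a sentence, a stack of 1-D transposed ("deconvolutional") convolutions decodes logits for every token position in parallel rather than recurrently as an LSTM would, and a classification head supplies the discriminative objective alongside the generative (reconstruction plus KL) term. All layer sizes, the class and function names, and the weighting term alpha are illustrative assumptions rather than the authors' exact configuration; the paper's matching task scores sentence pairs, whereas this sketch classifies a single sentence for brevity.

# Minimal sketch (PyTorch assumed); names, sizes, and the loss weighting
# are illustrative assumptions, not the authors' exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeconvLatentModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, latent_dim=64,
                 seq_len=32, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Convolutional encoder: token sequence -> Gaussian latent parameters.
        self.encoder = nn.Sequential(
            nn.Conv1d(embed_dim, 256, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(256, 512, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        self.enc_len = seq_len // 4          # two stride-2 convolutions
        self.to_mu = nn.Linear(512 * self.enc_len, latent_dim)
        self.to_logvar = nn.Linear(512 * self.enc_len, latent_dim)
        # Deconvolutional decoder: latent code -> vocabulary logits for every
        # position at once (no recurrence, unlike an LSTM decoder).
        self.from_z = nn.Linear(latent_dim, 512 * self.enc_len)
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(512, 256, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(256, vocab_size, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )
        # Discriminative head on the latent code (hypothetical stand-in for
        # the paper's sentence-pair matching classifier).
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, tokens):
        x = self.embed(tokens).transpose(1, 2)           # (B, embed, L)
        h = self.encoder(x).flatten(1)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        g = self.from_z(z).view(-1, 512, self.enc_len)
        logits = self.decoder(g)                         # (B, vocab, L)
        return logits, mu, logvar, self.classifier(mu)

def joint_loss(logits, tokens, mu, logvar, class_logits, labels, alpha=1.0):
    rec = F.cross_entropy(logits, tokens)                # reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL term
    disc = F.cross_entropy(class_logits, labels)         # discriminative term
    return rec + kl + alpha * disc                       # alpha is assumed

if __name__ == "__main__":
    model = DeconvLatentModel()
    tokens = torch.randint(0, 10000, (4, 32))            # toy token batch
    labels = torch.randint(0, 2, (4,))
    logits, mu, logvar, cls = model(tokens)
    joint_loss(logits, tokens, mu, logvar, cls, labels).backward()

Because the deconvolutional decoder emits all positions at once, it cannot condition each token on previously generated ones, which is consistent with the abstract's claim that this pushes more semantic information into the latent code than an LSTM decoder would.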

Cite (APA)

Shen, D., Zhang, Y., Henao, R., Su, Q., & Carin, L. (2018). Deconvolutional latent-variable model for text sequence matching. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 5438–5445). AAAI Press. https://doi.org/10.1609/aaai.v32i1.11991
