A cross-sentence latent variable model for semi-supervised text sequence matching

Citations: 5 · Mendeley readers: 128

Abstract

We present a latent variable model for predicting the relationship between a pair of text sequences. Unlike previous auto-encoding-based approaches that consider each sequence separately, our proposed framework utilizes both sequences within a single model by generating a sequence that has a given relationship with a source sequence. We further extend the cross-sentence generating framework to facilitate semi-supervised training. We also define novel semantic constraints that guide the decoder network toward generating semantically plausible and diverse sequences. We demonstrate the effectiveness of the proposed model through quantitative and qualitative experiments, achieving state-of-the-art results on semi-supervised natural language inference and paraphrase identification.
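To make the architecture described in the abstract more concrete, below is a minimal, hypothetical PyTorch sketch of a cross-sentence conditional latent variable model: a shared encoder reads both sequences, an inference network produces a latent code from the sentence pair and the relation label, and a decoder reconstructs the target sequence conditioned on the source, the label, and the latent code, alongside a classifier for the matching task. All module names, dimensions, and the objective here are illustrative assumptions, not the authors' implementation; the paper's semantic constraints and full semi-supervised objective are omitted.

```python
# Hypothetical sketch of a cross-sentence latent variable model (not the
# authors' code): a conditional VAE-style generator plus a pair classifier.
import torch
import torch.nn as nn


class CrossSentenceLVM(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, latent_dim=64, num_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared GRU encoder for source and target sequences.
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Inference network q(z | source, target, label).
        self.q_mu = nn.Linear(2 * hid_dim + num_labels, latent_dim)
        self.q_logvar = nn.Linear(2 * hid_dim + num_labels, latent_dim)
        # Decoder p(target | source, label, z); its initial state is a
        # projection of the source encoding, the label, and the latent code.
        self.dec_init = nn.Linear(hid_dim + num_labels + latent_dim, hid_dim)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)
        # Classifier p(label | source, target) for the matching task.
        self.classifier = nn.Linear(2 * hid_dim, num_labels)
        self.num_labels = num_labels

    def encode(self, tokens):
        _, h = self.encoder(self.embed(tokens))
        return h.squeeze(0)  # (batch, hid_dim)

    def forward(self, src, tgt, label):
        h_src, h_tgt = self.encode(src), self.encode(tgt)
        y = nn.functional.one_hot(label, self.num_labels).float()
        # Posterior over the latent code, conditioned on both sentences and the label.
        q_in = torch.cat([h_src, h_tgt, y], dim=-1)
        mu, logvar = self.q_mu(q_in), self.q_logvar(q_in)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        # Decode the target with teacher forcing on shifted inputs.
        h0 = torch.tanh(self.dec_init(torch.cat([h_src, y, z], dim=-1))).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(tgt[:, :-1]), h0)
        logits = self.out(dec_out)
        recon = nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), tgt[:, 1:].reshape(-1))
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        cls_logits = self.classifier(torch.cat([h_src, h_tgt], dim=-1))
        return recon + kl, cls_logits


if __name__ == "__main__":
    # Toy forward pass with random token ids, purely for shape checking.
    model = CrossSentenceLVM(vocab_size=1000)
    src = torch.randint(0, 1000, (4, 12))
    tgt = torch.randint(0, 1000, (4, 10))
    label = torch.randint(0, 3, (4,))
    gen_loss, cls_logits = model(src, tgt, label)
    print(gen_loss.item(), cls_logits.shape)
```

In a semi-supervised setting one would typically add an unlabeled-data term that marginalizes over the relation label, as in standard semi-supervised VAE formulations; that extension is left out of this sketch for brevity.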

Cite

APA

Choi, J., Kim, T., & Lee, S. G. (2020). A cross-sentence latent variable model for semi-supervised text sequence matching. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (pp. 4747–4761). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p19-1469
