Crossing variational autoencoders for answer retrieval

16 citations · 125 Mendeley readers

Abstract

Answer retrieval is the task of finding the best-aligned answer to a given question from a large set of candidates. Learning vector representations of questions and answers is the key to this task. Question-answer alignment and question/answer semantics are two important signals for learning such representations. Existing methods learn semantic representations with dual encoders or dual variational autoencoders, where the semantic information comes from language models or from question-to-question (answer-to-answer) generative processes. However, alignment and semantics are modeled too separately to capture the aligned semantics between a question and its answer. In this work, we propose to cross the variational autoencoders: questions are generated from their aligned answers, and answers are generated from their aligned questions. Experiments show that our method outperforms the state-of-the-art answer retrieval method on SQuAD.
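The crossing idea from the abstract can be sketched as a pair of VAE-style objectives in which each side's latent code reconstructs the *other* side. The sketch below uses plain NumPy with randomly initialized weights and made-up dimensions; all names (`Wq_mu`, `Wdec_a`, etc.) are illustrative assumptions, not the authors' implementation, and the "embeddings" are random stand-ins for encoded question/answer text.

```python
import numpy as np

rng = np.random.default_rng(0)
D, Z = 32, 8  # embedding and latent sizes (assumed for illustration)

def init(shape):
    return rng.normal(0.0, 0.1, shape)

# Linear Gaussian encoders q(z|question) and q(z|answer)
Wq_mu, Wq_lv = init((D, Z)), init((D, Z))
Wa_mu, Wa_lv = init((D, Z)), init((D, Z))
# Crossed decoders: the question's latent decodes the answer, and vice versa
Wdec_a, Wdec_q = init((Z, D)), init((Z, D))

def encode(x, W_mu, W_lv):
    mu, logvar = x @ W_mu, x @ W_lv
    # Reparameterization trick: z = mu + sigma * eps
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
    return z, mu, logvar

def kl(mu, logvar):
    # Closed-form KL(q(z|x) || N(0, I))
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def cross_vae_loss(question, answer):
    zq, mq, lq = encode(question, Wq_mu, Wq_lv)
    za, ma, la = encode(answer, Wa_mu, Wa_lv)
    # Cross reconstruction: generate the answer from the question's
    # latent and the question from the answer's latent
    recon_a = np.sum((zq @ Wdec_a - answer) ** 2)
    recon_q = np.sum((za @ Wdec_q - question) ** 2)
    return recon_a + recon_q + kl(mq, lq) + kl(ma, la)

# Random stand-ins for an encoded question/answer pair
q_vec = rng.normal(size=D)
a_vec = rng.normal(size=D)
loss = cross_vae_loss(q_vec, a_vec)
```

The crossed reconstruction terms are what ties alignment and semantics together in one objective: minimizing them forces the question and answer latents to carry information about each other, unlike dual VAEs that only reconstruct each side from itself.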

Cite

CITATION STYLE

APA

Yu, W., Wu, L., Zeng, Q., Tao, S., Deng, Y., & Jiang, M. (2020). Crossing variational autoencoders for answer retrieval. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 5635–5641). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-main.498
