xMoCo: Cross momentum contrastive learning for open-domain question answering

31 citations · 84 Mendeley readers

Abstract

Dense passage retrieval has been shown to be an effective approach for information-retrieval tasks such as open-domain question answering. Under this paradigm, a dual-encoder model is trained to encode questions and passages separately into vector representations; all passage vectors are then pre-computed and indexed, so that relevant passages can be retrieved efficiently via vector-space search at inference time. In this paper, we propose a new contrastive learning method, cross momentum contrastive learning (xMoCo), for training a dual-encoder model for query-passage matching. Like the original MoCo, our method efficiently maintains a large pool of negative samples; by jointly optimizing question-to-passage and passage-to-question matching, it further enables the use of separate encoders for questions and passages. We evaluate our method on several open-domain QA datasets, and the experimental results demonstrate the effectiveness of the proposed approach.
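The abstract names two mechanical ingredients: a MoCo-style momentum update and a large FIFO pool of negative samples, used jointly in the question-to-passage and passage-to-question directions. Below is a minimal pure-Python sketch of those mechanics only; the toy vectors, the tiny queue size, and the simplification of applying the momentum update directly to embeddings (rather than to encoder parameters, as in the actual model) are illustrative assumptions, not details from the paper.

```python
import math
from collections import deque

def momentum_update(fast, slow, m=0.99):
    """Move the slow (momentum) representation toward the fast one.
    Simplification: real MoCo/xMoCo applies this to encoder *parameters*;
    here we apply it to flat embedding vectors to keep the sketch tiny."""
    return [m * s + (1.0 - m) * f for f, s in zip(fast, slow)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(query_vec, positive_vec, negative_vecs, tau=0.07):
    """InfoNCE loss: negative log-softmax score of the positive pair
    against the queued negatives (computed in a numerically stable way)."""
    logits = [dot(query_vec, positive_vec) / tau] + \
             [dot(query_vec, n) / tau for n in negative_vecs]
    mx = max(logits)
    log_z = mx + math.log(sum(math.exp(l - mx) for l in logits))
    return log_z - logits[0]

# Two FIFO queues of momentum-encoded embeddings, one per matching
# direction, mirroring the joint question->passage and passage->question
# objectives described in the abstract.
K = 4  # queue capacity (tiny here; the method maintains a large pool)
passage_queue = deque(maxlen=K)
question_queue = deque(maxlen=K)

# Toy embeddings standing in for encoder outputs.
q_fast = [0.2, 0.9]  # question, fast question encoder
p_fast = [0.3, 0.8]  # passage, fast passage encoder
q_slow = momentum_update(q_fast, [0.0, 0.0])  # momentum question embedding
p_slow = momentum_update(p_fast, [0.0, 0.0])  # momentum passage embedding

# One training step's symmetric loss.
loss_qp = info_nce(q_fast, p_slow, list(passage_queue))   # question -> passage
loss_pq = info_nce(p_fast, q_slow, list(question_queue))  # passage -> question
loss = loss_qp + loss_pq

# Enqueue the momentum embeddings to serve as negatives for later batches;
# deque(maxlen=K) silently evicts the oldest entries.
passage_queue.append(p_slow)
question_queue.append(q_slow)
```

The deque with `maxlen` gives the constant-size negative pool for free: each step enqueues the current batch's momentum embeddings and implicitly dequeues the oldest, so negatives stay plentiful without re-encoding.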

Citation (APA)

Yang, N., Wei, F., Jiao, B., Jiang, D., & Yang, L. (2021). xMoCo: Cross momentum contrastive learning for open-domain question answering. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (Vol. 1, pp. 6120–6129). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-long.477
