Leveraging passage retrieval with generative models for open domain question answering

Abstract

Generative models for open domain question answering have proven to be competitive, without resorting to external knowledge. While promising, this approach requires the use of models with billions of parameters, which are expensive to train and query. In this paper, we investigate how much these models can benefit from retrieving text passages, potentially containing evidence. We obtain state-of-the-art results on the Natural Questions and TriviaQA open benchmarks. Interestingly, we observe that the performance of this method improves significantly as the number of retrieved passages increases. This is evidence that sequence-to-sequence models offer a flexible framework to efficiently aggregate and combine evidence from multiple passages.
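
As a rough illustration of the retrieve-then-generate pipeline the abstract describes, the sketch below feeds a question together with several retrieved passages to a pretrained T5 model via Hugging Face Transformers, encoding each (question, passage) pair independently and concatenating the encoder outputs so the decoder can attend over all of the evidence at once. This is a minimal approximation, not the authors' released implementation: the checkpoint name (t5-base), the hard-coded passages, and the "question: ... context: ..." input format are placeholder assumptions, and the reader would need QA fine-tuning before its answers are meaningful.

# A minimal sketch of the retrieve-then-read idea, not the authors' code.
# Assumptions: a vanilla "t5-base" checkpoint (the paper fine-tunes its reader
# on QA data), two hand-written passages standing in for retriever output,
# and a simple "question: ... context: ..." input format.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
from transformers.modeling_outputs import BaseModelOutput

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

question = "where was the declaration of independence signed?"
passages = [  # in practice these would come from a retriever such as BM25 or DPR
    "The Declaration of Independence was signed in Philadelphia.",
    "The signing took place at the Pennsylvania State House, now Independence Hall.",
]

# Encode each (question, passage) pair independently.
inputs = [f"question: {question} context: {p}" for p in passages]
enc = tokenizer(inputs, padding=True, truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    encoder_outputs = model.get_encoder()(
        input_ids=enc["input_ids"], attention_mask=enc["attention_mask"]
    )

# Concatenate the per-passage token representations into one long sequence,
# so the decoder can attend jointly over evidence from every passage.
hidden = encoder_outputs.last_hidden_state              # (n_passages, seq_len, d_model)
fused_hidden = hidden.reshape(1, -1, hidden.size(-1))   # (1, n_passages * seq_len, d_model)
fused_mask = enc["attention_mask"].reshape(1, -1)

answer_ids = model.generate(
    encoder_outputs=BaseModelOutput(last_hidden_state=fused_hidden),
    attention_mask=fused_mask,
    max_length=32,
)
print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))

Under this scheme, retrieving more passages only lengthens the concatenated encoder sequence, which is consistent with the abstract's observation that accuracy keeps improving as the number of retrieved passages grows.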

Citation (APA)
Izacard, G., & Grave, E. (2021). Leveraging passage retrieval with generative models for open domain question answering. In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 874–880). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.eacl-main.74
