Abstract
Retrieving relevant contexts from a large corpus is a crucial step for tasks such as open-domain question answering and fact checking. Although neural retrieval outperforms traditional methods like tf-idf and BM25, its performance degrades considerably when applied to out-of-domain data. Driven by the question of whether a neural retrieval model can be universal and perform robustly on a wide variety of problems, we propose a multi-task trained model. Our approach not only surpasses previous methods in the few-shot setting, but also rivals specialised neural retrievers, even when in-domain training data is abundant. With the help of our retriever, we improve existing models for downstream tasks and closely match or improve the state of the art on multiple benchmarks.
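Although the abstract states the approach only at a high level, the retriever it describes builds on the DPR-style dense bi-encoder, which scores a passage by the inner product between dense query and passage embeddings and shares one encoder pair across tasks. The sketch below illustrates that scoring scheme under stated assumptions: the embed function, toy corpus, and retrieve helper are hypothetical stand-ins for trained encoders and a real index, not the paper's implementation.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Hypothetical stand-in for a trained dense encoder tower
    # (e.g. a BERT bi-encoder): hash each token into a fixed-size
    # vector and L2-normalise, so the inner product acts as a
    # similarity score.
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def retrieve(query: str, passages: list[str], k: int = 2) -> list[tuple[float, str]]:
    # Dense retrieval: rank passages by the inner product between
    # the query embedding and each passage embedding, as in DPR.
    q = embed(query)
    scored = [(float(q @ embed(p)), p) for p in passages]
    return sorted(scored, reverse=True)[:k]

corpus = [
    "BM25 ranks documents by term frequency and inverse document frequency.",
    "Dense retrievers embed queries and passages into a shared vector space.",
    "Multi-task training shares one retriever across many downstream tasks.",
]
print(retrieve("how does dense retrieval work", corpus))
```

In the multi-task setting, the same query and passage encoders would be trained jointly on examples from many knowledge-intensive tasks rather than on a single dataset, which is what lets one retriever serve them all.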
Citation
Maillard, J., Karpukhin, V., Petroni, F., Yih, W. T., Oguz, B., Stoyanov, V., & Ghosh, G. (2021). Multi-task retrieval for knowledge-intensive tasks. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (Vol. 1, pp. 1098–1111). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-long.89