Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One?

Abstract

Despite their recent popularity and well-known advantages, dense retrievers still lag behind sparse methods such as BM25 in their ability to reliably match salient phrases and rare entities in the query and to generalize to out-of-domain data. It has been argued that this is an inherent limitation of dense models. We rebut this claim by introducing the Salient Phrase Aware Retriever (SPAR), a dense retriever with the lexical matching capacity of a sparse model. We show that a dense Lexical Model Λ can be trained to imitate a sparse one, and SPAR is built by augmenting a standard dense retriever with Λ. Empirically, SPAR shows superior performance on a range of tasks including five question answering datasets, MS MARCO passage retrieval, as well as the EntityQuestions and BEIR benchmarks for out-of-domain evaluation, exceeding the performance of state-of-the-art dense and sparse retrievers.
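The abstract describes combining a standard dense retriever with the dense Lexical Model Λ. As a rough illustration of how such a combination can score query-passage pairs, here is a minimal sketch in which the final score is the base retriever's similarity plus a weighted similarity from Λ, implemented by concatenating the two embeddings. The encoders below are deterministic random-vector placeholders, and the names (`encode_dense`, `encode_lexical`, `mu`, the 768 dimension) are illustrative assumptions, not the paper's trained models or exact notation.

```python
# Hedged sketch of a SPAR-style score: dense_score + mu * lexical_score.
# Folding the weight mu into the query-side Λ embedding lets the combined
# vectors be indexed with an ordinary dot-product nearest-neighbor search.
import hashlib
import numpy as np

DIM = 768  # illustrative embedding width, not prescribed by the paper


def _toy_encoder(tag: str, text: str) -> np.ndarray:
    # Deterministic random vector standing in for a trained encoder.
    seed = int.from_bytes(hashlib.sha256(f"{tag}:{text}".encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(DIM)


def encode_dense(text: str) -> np.ndarray:
    """Placeholder for the base dense retriever's encoder."""
    return _toy_encoder("dense", text)


def encode_lexical(text: str) -> np.ndarray:
    """Placeholder for Λ, trained elsewhere to imitate a sparse scorer."""
    return _toy_encoder("lex", text)


def spar_vector(text: str, mu: float, is_query: bool) -> np.ndarray:
    # Concatenation makes the dot product of the combined vectors equal
    # dense . dense  +  mu * (lex . lex), i.e. the additive combination.
    dense, lex = encode_dense(text), encode_lexical(text)
    if is_query:
        lex = mu * lex  # apply the combination weight on the query side only
    return np.concatenate([dense, lex])


def score(query: str, passage: str, mu: float = 1.0) -> float:
    q = spar_vector(query, mu, is_query=True)
    p = spar_vector(passage, mu, is_query=False)
    return float(q @ p)


if __name__ == "__main__":
    print(score("who wrote the iliad", "The Iliad is attributed to Homer."))
```

With real encoders in place of the placeholders, the same concatenation trick would let a single approximate-nearest-neighbor index serve the combined retriever.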

Cite

Chen, X., Lakhotia, K., Oğuz, B., Gupta, A., Lewis, P., Peshterliev, S., … Yih, W. T. (2022). Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One? In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 250–262). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-emnlp.407
