Sparse annotation poses persistent challenges to training dense retrieval models; for example, it distorts the training signal when unlabeled relevant documents are spuriously used as negatives in contrastive learning. To alleviate this problem, we introduce evidence-based label smoothing, a novel, computationally efficient method that avoids penalizing the model for assigning high relevance to false negatives. To compute the target relevance distribution over candidate documents within the ranking context of a given query, the candidates most similar to the ground truth are assigned a nonzero relevance probability, scaled by their degree of similarity to the ground-truth document(s). To estimate this similarity, we leverage an improved metric based on reciprocal nearest neighbors, which can also be used independently to rerank candidates in post-processing. Through extensive experiments on two large-scale ad hoc text retrieval datasets, we demonstrate that reciprocal nearest neighbors can improve the ranking effectiveness of dense retrieval models both when used for label smoothing and when used for reranking. This indicates that considering relationships between documents and queries beyond simple geometric distance can effectively enhance the ranking context.
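The abstract itself gives no implementation details, so the following is a minimal sketch of how the two ideas might be realized, not the authors' code. It assumes cosine similarity as the base metric, a Jaccard overlap of k-reciprocal neighbor sets as the refined similarity (in the spirit of k-reciprocal reranking), and a fixed smoothing mass spread over candidates in proportion to their refined similarity to the ground truth. All function names and parameters (`k`, `smoothing_mass`, etc.) are illustrative assumptions.

```python
import numpy as np

def knn_indices(sim: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k most similar items for each row of `sim`."""
    return np.argsort(-sim, axis=1)[:, :k]

def reciprocal_nn_similarity(emb: np.ndarray, k: int = 10) -> np.ndarray:
    """Jaccard similarity between k-reciprocal-neighbor sets of all items.

    Assumption: cosine similarity over L2-normalized embeddings as the
    base metric; the abstract does not specify one.
    """
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T                       # cosine similarity matrix
    nn = knn_indices(sim, k)
    n = emb.shape[0]
    # member[i, j] is True iff j is among i's k nearest neighbors.
    member = np.zeros((n, n), dtype=bool)
    np.put_along_axis(member, nn, True, axis=1)
    # Reciprocal neighbors: j in kNN(i) AND i in kNN(j).
    recip = (member & member.T).astype(int)
    inter = recip @ recip.T                 # |R(i) ∩ R(j)|
    sizes = recip.sum(axis=1)
    union = sizes[:, None] + sizes[None, :] - inter
    return np.where(union > 0, inter / np.maximum(union, 1), 0.0)

def smoothed_targets(rnn_sim_to_gt: np.ndarray, positive_idx: int,
                     smoothing_mass: float = 0.2) -> np.ndarray:
    """Evidence-based smoothing sketch: spread `smoothing_mass` over
    candidates in proportion to their reciprocal-NN similarity to the
    ground-truth document; the labeled positive keeps the remainder."""
    weights = rnn_sim_to_gt.copy()
    weights[positive_idx] = 0.0
    targets = np.zeros_like(weights)
    if weights.sum() > 0:
        targets = smoothing_mass * weights / weights.sum()
    targets[positive_idx] = 1.0 - targets.sum()
    return targets
```

As a usage sketch, `reciprocal_nn_similarity` applied to the stacked query and candidate embeddings yields refined similarities whose ground-truth row can be passed to `smoothed_targets` to produce the target distribution for the contrastive loss; the same refined similarities could instead be used directly to reorder candidates in post-processing.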
Zerveas, G., Rekabsaz, N., & Eickhoff, C. (2023). Enhancing the Ranking Context of Dense Retrieval through Reciprocal Nearest Neighbors. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023) (pp. 10779–10803). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.emnlp-main.665