Contrastive Fine-tuning Improves Robustness for Neural Rankers


Abstract

The performance of state-of-the-art neural rankers can deteriorate substantially when exposed to noisy inputs or applied to a new domain. In this paper, we present a novel method for fine-tuning neural rankers that can significantly improve their robustness to out-of-domain data and query perturbations. Specifically, a contrastive loss that compares data points in the representation space is combined with the standard ranking loss during fine-tuning. We use relevance labels to denote similar/dissimilar pairs, which allows the model to learn the underlying matching semantics across different query-document pairs and leads to improved robustness. In experiments with four passage ranking datasets, the proposed contrastive fine-tuning method obtains improvements on robustness to query reformulations, noise perturbations, and zero-shot transfer for both BERT and BART-based rankers. Additionally, our experiments show that contrastive fine-tuning outperforms data augmentation for robustifying neural rankers.
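The abstract describes combining a contrastive loss over query-document representations with the standard ranking loss, where relevance labels define similar/dissimilar pairs. The following is a minimal PyTorch sketch of that general idea; the SupCon-style loss formulation, the temperature, the weighting factor alpha, and the pooled-embedding inputs are illustrative assumptions rather than the authors' exact implementation.

```python
# Sketch: fine-tuning objective = ranking loss + weighted contrastive loss.
# Assumptions (not from the paper): SupCon-style contrastive term,
# temperature 0.1, weight alpha 0.5, pooled [CLS]-like embeddings.
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Pulls together pairs sharing a relevance label, pushes apart the rest."""
    z = F.normalize(embeddings, dim=-1)                      # (B, d) unit vectors
    sim = z @ z.t() / temperature                            # pairwise similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))          # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Positives: same relevance label, excluding the anchor itself.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / pos_counts
    # Only anchors with at least one positive contribute to the loss.
    return loss[pos_mask.any(dim=1)].mean()


def combined_loss(logits, embeddings, labels, alpha=0.5):
    """Standard (cross-entropy) ranking loss plus weighted contrastive term."""
    ranking_loss = F.cross_entropy(logits, labels)
    contrastive = supervised_contrastive_loss(embeddings, labels)
    return ranking_loss + alpha * contrastive


# Toy usage: a batch of 8 query-document pairs with binary relevance labels.
batch_embeddings = torch.randn(8, 128, requires_grad=True)   # pooled pair encodings
batch_logits = torch.randn(8, 2, requires_grad=True)         # relevant / not relevant
batch_labels = torch.randint(0, 2, (8,))
print(combined_loss(batch_logits, batch_embeddings, batch_labels))
```

In this sketch the contrastive term operates on the same in-batch examples as the ranking loss, so relevant and non-relevant query-document pairs are clustered apart in representation space while the ranker is trained as usual.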

Citation (APA)

Ma, X., dos Santos, C. N., & Arnold, A. O. (2021). Contrastive Fine-tuning Improves Robustness for Neural Rankers. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 570–582). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.51
