Looking at the Overlooked: An Analysis on the Word-Overlap Bias in Natural Language Inference

Citations: 4 · Mendeley readers: 25

Abstract

It has been shown that NLI models are usually biased with respect to the word-overlap between premise and hypothesis; they take this feature as a primary cue for predicting the entailment label. In this paper, we focus on an overlooked aspect of the overlap bias in NLI models: the reverse word-overlap bias. Our experimental results demonstrate that current NLI models are highly biased towards the non-entailment label on instances with low overlap, and the existing debiasing methods, which are reportedly successful on existing challenge datasets, are generally ineffective in addressing this category of bias. We investigate the reasons for the emergence of the overlap bias and the role of minority examples in its mitigation. For the former, we find that the word-overlap bias does not stem from pre-training, and for the latter, we observe that in contrast to the accepted assumption, eliminating minority examples does not affect the generalizability of debiasing methods with respect to the overlap bias. All the code and relevant data are available at: https://github.com/sara-rajaee/reverse_bias.
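To make the central feature of the abstract concrete, here is a minimal sketch of how a word-overlap score between premise and hypothesis might be computed. The function name, whitespace tokenization, and normalization by hypothesis length are illustrative assumptions, not the authors' implementation.

```python
def word_overlap(premise: str, hypothesis: str) -> float:
    """Fraction of hypothesis tokens that also appear in the premise.

    A crude proxy for the word-overlap feature that biased NLI models
    reportedly use as a shortcut for predicting entailment.
    """
    premise_tokens = set(premise.lower().split())
    hypothesis_tokens = hypothesis.lower().split()
    if not hypothesis_tokens:
        return 0.0
    shared = sum(1 for tok in hypothesis_tokens if tok in premise_tokens)
    return shared / len(hypothesis_tokens)

# High overlap: biased models tend to predict "entailment".
print(word_overlap("the cat sat on the mat", "the cat sat"))   # 1.0
# Low overlap: the reverse bias studied here pushes models
# toward "non-entailment" regardless of the true label.
print(word_overlap("the cat sat on the mat", "a dog barked"))  # 0.0
```

Instances with low scores under such a measure are exactly the ones on which the paper reports a strong bias toward the non-entailment label.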

Citation (APA)

Rajaee, S., Yaghoobzadeh, Y., & Pilehvar, M. T. (2022). Looking at the Overlooked: An Analysis on the Word-Overlap Bias in Natural Language Inference. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 10605–10616). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.725
