On the Limits of Minimal Pairs in Contrastive Evaluation


Abstract

Minimal sentence pairs are frequently used to analyze the behavior of language models. It is often assumed that model behavior on contrastive pairs is predictive of model behavior at large. We argue that two conditions are necessary for this assumption to hold: First, a tested hypothesis should be well-motivated, since experiments show that contrastive evaluation can lead to false positives. Second, test data should be chosen so as to minimize the distributional discrepancy between evaluation time and deployment time. For a good approximation of deployment-time decoding, we recommend that minimal pairs be created from machine-generated text rather than from human-written references. We present a contrastive evaluation suite for English–German MT that implements this recommendation.
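
As a concrete illustration of the technique the abstract describes, the sketch below performs contrastive evaluation on a single minimal pair: it scores a correct translation and a contrastive variant with a pretrained English–German model and checks which one receives the higher probability. The model name (Helsinki-NLP/opus-mt-en-de), the sequence_log_prob helper, and the example sentences are illustrative assumptions, not the paper's released evaluation suite.

```python
# Minimal sketch of contrastive evaluation with a minimal pair.
# Assumes a Hugging Face seq2seq MT model; names are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "Helsinki-NLP/opus-mt-en-de"  # assumed English–German model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME).eval()

def sequence_log_prob(source: str, target: str) -> float:
    """Sum of token log-probabilities the model assigns to `target` given `source`."""
    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(text_target=target, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(**inputs, labels=labels).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    # Gather the log-probability of each reference token at its position.
    token_log_probs = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    return token_log_probs.sum().item()

# A minimal pair: the correct translation and a contrastive variant that
# differs only in the phenomenon under test (hypothetical example sentences).
source = "She gave the book to her brother."
correct = "Sie gab ihrem Bruder das Buch."
contrastive = "Sie gab ihrer Bruder das Buch."  # wrong case/gender agreement

# The model "passes" the pair if it scores the correct variant higher.
passed = sequence_log_prob(source, correct) > sequence_log_prob(source, contrastive)
print("correct variant preferred:", passed)
```

The paper's recommendation concerns how such pairs are constructed: building the correct and contrastive variants from machine-generated output rather than from human references keeps the evaluated text closer to what the model actually produces at deployment time.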

Citation (APA)

Vamvas, J., & Sennrich, R. (2021). On the Limits of Minimal Pairs in Contrastive Evaluation. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (pp. 58–68). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.blackboxnlp-1.5
