NLP models often rely on superficial cues known as dataset biases to achieve impressive performance, and can fail on examples where these biases do not hold. Recent work has sought to develop robust, unbiased models by filtering biased examples from training sets. In this work, we argue that such filtering can obscure models' true ability to overcome biases, which may never be fully removed from the dataset. We suggest that in order to drive the development of models robust to subtle biases, dataset biases should be amplified in the training set. We introduce an evaluation framework defined by a bias-amplified training set and an anti-biased test set, both automatically extracted from existing datasets. Experiments across three notions of bias, four datasets, and two models show that our framework is substantially more challenging for models than the original data splits, and even more challenging than hand-crafted challenge sets. Our evaluation framework can use any existing dataset, even those considered obsolete, to test model robustness. We hope our work will guide the development of robust models that do not rely on superficial biases and correlations. To this end, we publicly release our code and data.
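The abstract describes splitting an existing dataset into a bias-amplified training set and an anti-biased test set. The sketch below is only a hypothetical illustration of that general idea, not the authors' extraction procedure: it assumes a shallow bag-of-words classifier as the "bias model", scores each example by the held-out probability it assigns the gold label, and routes confidently-right examples to the bias-amplified training set and confidently-wrong ones to the anti-biased test set. The threshold and the choice of bias model are arbitrary assumptions.

```python
# Hypothetical sketch of a bias-amplified train / anti-biased test split.
# NOT the paper's exact method: the bag-of-words logistic regression "bias model"
# and the 0.9 confidence threshold are illustrative assumptions only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def bias_split(texts, labels, bias_threshold=0.9):
    """Partition a dataset using a shallow bias model's held-out predictions."""
    vec = CountVectorizer(max_features=5000)
    X = vec.fit_transform(texts)

    # Held-out class probabilities from a weak, bias-prone model (5-fold CV),
    # so each example is scored by a model that never saw it during training.
    bias_model = LogisticRegression(max_iter=1000)
    probs = cross_val_predict(bias_model, X, labels, cv=5, method="predict_proba")
    classes = sorted(set(labels))
    gold_prob = [p[classes.index(y)] for p, y in zip(probs, labels)]

    train_idx, test_idx = [], []
    for i, p in enumerate(gold_prob):
        if p >= bias_threshold:            # bias model confidently right -> biased example
            train_idx.append(i)            # goes to the bias-amplified training set
        elif p <= 1.0 - bias_threshold:    # bias model confidently wrong -> anti-biased example
            test_idx.append(i)             # goes to the anti-biased test set
    return train_idx, test_idx
```

Under this toy setup, a model trained only on the bias-amplified portion and evaluated on the anti-biased portion cannot succeed by relying on the superficial cues the shallow model exploits.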
CITATION STYLE
Reif, Y., & Schwartz, R. (2023). Fighting Bias with Bias: Promoting Model Robustness by Amplifying Dataset Biases. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 13169–13189). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.833