Counter-GAP: Counterfactual Bias Evaluation through Gendered Ambiguous Pronouns


Abstract

Bias-measuring datasets play a critical role in detecting biased behavior of language models and in evaluating the progress of bias mitigation methods. In this work, we focus on evaluating gender bias through coreference resolution, where previous datasets are either hand-crafted or fail to reliably measure an explicitly defined bias. To overcome these shortcomings, we propose a novel method to collect diverse, natural, and minimally distant text pairs via counterfactual generation, and construct Counter-GAP, an annotated dataset consisting of 4008 instances grouped into 1002 quadruples. We further identify a bias cancellation problem in previous group-level metrics on Counter-GAP, and propose to use the difference between inconsistency across genders and inconsistency within genders to measure bias at a quadruple level. Our results show that four pre-trained language models are significantly more inconsistent across different gender groups than within each group, and that a name-based counterfactual data augmentation method is more effective at mitigating such bias than an anonymization-based method.
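The quadruple-level metric described above can be read as: bias = (inconsistency across genders) − (inconsistency within genders). The following is a minimal illustrative sketch of that reading, not the paper's actual implementation; the data layout (two masculine and two feminine counterfactual variants per quadruple, with predictions encoded as antecedent indices) and all function names are assumptions for illustration.

```python
def inconsistency(pairs):
    """Fraction of prediction pairs that disagree."""
    return sum(a != b for a, b in pairs) / len(pairs)

def quadruple_bias(quadruples):
    # Each quadruple (m1, m2, f1, f2) holds model predictions (e.g. the
    # index of the chosen antecedent) on two masculine and two feminine
    # counterfactual variants of the same passage.
    within = [(m1, m2) for m1, m2, _, _ in quadruples] + \
             [(f1, f2) for _, _, f1, f2 in quadruples]
    across = [(m1, f1) for m1, _, f1, _ in quadruples] + \
             [(m2, f2) for _, m2, _, f2 in quadruples]
    # Positive values mean the model flips its prediction more often when
    # gender changes than when only the (same-gender) name changes.
    return inconsistency(across) - inconsistency(within)

# Toy predictions for three quadruples:
preds = [(0, 0, 1, 1), (1, 1, 1, 1), (0, 1, 0, 0)]
print(quadruple_bias(preds))  # 0.5 across - 1/6 within = 1/3
```

A model that is equally unstable under gender-swapping and same-gender name-swapping scores near zero, which is what makes the metric robust to the bias cancellation problem the abstract mentions.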

Citation (APA)

Xie, Z., Kocijan, V., Lukasiewicz, T., & Camburu, O. M. (2023). Counter-GAP: Counterfactual Bias Evaluation through Gendered Ambiguous Pronouns. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 3743–3755). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.eacl-main.272
