Can We Improve Model Robustness through Secondary Attribute Counterfactuals?

Abstract

Developing robust NLP models that perform well on many, even small, slices of data is a significant but important challenge, with implications from fairness to general reliability. To this end, recent research has explored how models rely on spurious correlations, and how counterfactual data augmentation (CDA) can mitigate such issues. In this paper we study how and why modeling counterfactuals over multiple attributes can go significantly further in improving model performance. We propose RDI, a context-aware methodology which takes into account the impact of secondary attributes on the model's predictions and increases sensitivity for secondary attributes over reweighted counterfactually augmented data. By implementing RDI in the context of toxicity detection, we find that accounting for secondary attributes can significantly improve robustness, with sliced-accuracy improvements of up to 7% on the original dataset compared to existing robustness methods. We also demonstrate that RDI generalizes to the coreference resolution task and provide guidelines to extend this to other tasks.
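The counterfactual data augmentation (CDA) the abstract builds on can be illustrated with a rough sketch (not the paper's implementation): a counterfactual example is produced by swapping attribute terms in a sentence, and both versions are then included in training. The `SWAP_PAIRS` table and `counterfactual` helper below are hypothetical, minimal stand-ins for the paper's actual attribute sets.

```python
# Minimal, illustrative sketch of counterfactual data augmentation (CDA)
# for toxicity detection. The swap table is a hypothetical example of a
# single (gender) attribute, not the paper's attribute vocabulary.

SWAP_PAIRS = {"he": "she", "she": "he", "his": "her", "her": "his"}

def counterfactual(sentence: str) -> str:
    """Return a counterfactual sentence by swapping attribute terms."""
    tokens = sentence.split()
    swapped = [SWAP_PAIRS.get(t.lower(), t) for t in tokens]
    return " ".join(swapped)

original = "he posted a rude comment on her page"
print(counterfactual(original))
# -> she posted a rude comment on his page
# A CDA training set would include both the original and the swap,
# encouraging the toxicity classifier not to rely on the identity term.
```

The paper's contribution goes beyond this single-attribute picture: it models counterfactuals over multiple (secondary) attributes and reweights the augmented data, rather than simply adding one swap per example.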

Cite

CITATION STYLE

APA

Balashankar, A., Wang, X., Packer, B., Thain, N., Chi, E. H., & Beutel, A. (2021). Can We Improve Model Robustness through Secondary Attribute Counterfactuals? In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 4701–4712). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.386
