An Investigation of the (In)effectiveness of Counterfactually Augmented Data

Abstract

While pretrained language models achieve excellent performance on natural language understanding benchmarks, they tend to rely on spurious correlations and generalize poorly to out-of-distribution (OOD) data. Recent work has explored using counterfactually augmented data (CAD), i.e., data generated by minimally perturbing examples to flip the ground-truth label, to identify robust features that are invariant under distribution shift. However, empirical results on using CAD during training for OOD generalization have been mixed. To explain this discrepancy, through a toy theoretical example and empirical analysis on two crowdsourced CAD datasets, we show that: (a) while features perturbed in CAD are indeed robust features, CAD may prevent the model from learning unperturbed robust features; and (b) CAD may exacerbate existing spurious correlations in the data. Our results thus show that a lack of perturbation diversity limits CAD's effectiveness on OOD generalization, calling for innovative crowdsourcing procedures to elicit diverse perturbations of examples.
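To make the setup concrete, below is a minimal illustrative sketch of how a counterfactually augmented training set is formed for a sentiment-classification task. The example texts and the train_set construction are hypothetical, chosen only to show the idea of a minimal label-flipping edit; they are not drawn from the paper's datasets.

    # Illustrative sketch: a counterfactual pair for sentiment classification.
    # CAD minimally edits an example so the ground-truth label flips while
    # leaving the rest of the text unchanged.
    original = {
        "text": "The acting was brilliant and the plot kept me hooked.",
        "label": "positive",
    }

    # A crowdworker's minimal perturbation: only the sentiment-carrying words
    # ("brilliant" -> "dreadful", "hooked" -> "bored") are edited, so the
    # label flips from positive to negative.
    counterfactual = {
        "text": "The acting was dreadful and the plot kept me bored.",
        "label": "negative",
    }

    # Training with CAD augments the original data with such pairs, so the
    # model observes both values of each perturbed (robust) feature. Words
    # that are never perturbed stay identical across the pair, which is why
    # CAD can fail to teach unperturbed robust features.
    train_set = [original, counterfactual]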

Citation (APA)

Joshi, N., & He, H. (2022). An Investigation of the (In)effectiveness of Counterfactually Augmented Data. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 3668–3681). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.256
