C2L: Causally Contrastive Learning for Robust Text Classification

Abstract

Despite the super-human accuracy of recent deep models on NLP tasks, their robustness is reportedly limited due to their reliance on spurious patterns. We thus aim to leverage contrastive learning and counterfactual augmentation for robustness. For augmentation, existing work either requires humans to add counterfactuals to the dataset or machines to automatically match near-counterfactuals already in the dataset. Unlike existing augmentation, which is affected by spurious correlations, ours synthesizes "a set" of counterfactuals and makes a collective decision on the distribution of predictions over this set, which can robustly supervise the causality of each term. Our empirical results show that our approach, by collective decisions, is less sensitive to task-model bias in attribution-based synthesis, and thus achieves significant improvements along diverse dimensions: 1) counterfactual robustness, 2) cross-domain generalization, and 3) generalization from scarce data.
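To make the collective-decision idea concrete, here is a minimal sketch (not the authors' released code) of how one might test each term's causality: mask the term, let a masked language model synthesize a set of counterfactual replacements, and judge the term causal only if the task model's prediction flips for most of the set. The model checkpoints, the causal_terms helper, and the flip_threshold parameter are illustrative assumptions, not details from the paper.

    # Hypothetical sketch of collective counterfactual causality testing,
    # assuming HuggingFace pipelines; the C2L paper's actual method also
    # uses attribution-based synthesis and a contrastive training loss.
    from transformers import pipeline

    # Counterfactual synthesizer: a masked LM proposes replacement terms.
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")
    # Placeholder task model; the paper fine-tunes its own classifier.
    classifier = pipeline(
        "text-classification",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )

    def causal_terms(sentence: str, top_k: int = 5, flip_threshold: float = 0.5):
        """Flag terms whose synthesized replacements collectively flip the prediction."""
        original_label = classifier(sentence)[0]["label"]
        words = sentence.split()
        causal = []
        for i, word in enumerate(words):
            # Mask the i-th term and synthesize a *set* of counterfactuals,
            # rather than matching a single near-counterfactual in the data.
            masked = " ".join(
                words[:i] + [fill_mask.tokenizer.mask_token] + words[i + 1:]
            )
            candidates = fill_mask(masked, top_k=top_k)
            flips = sum(
                classifier(c["sequence"])[0]["label"] != original_label
                for c in candidates
            )
            # Collective decision: a term is causal only when a majority of
            # its counterfactuals change the task model's prediction.
            if flips / top_k >= flip_threshold:
                causal.append(word)
        return causal

    print(causal_terms("The movie was absolutely wonderful ."))

Deciding on the whole set, rather than on any single synthesized counterfactual, is what makes the supervision less sensitive to a biased or unreliable individual replacement.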

Cite

APA

Choi, S., Jeong, M., Han, H., & Hwang, S. W. (2022). C2L: Causally Contrastive Learning for Robust Text Classification. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 10526–10534). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i10.21296
