Interventional Training for Out-Of-Distribution Natural Language Understanding

Abstract

Out-of-distribution (OOD) settings measure a model's performance when the distribution of the test data differs from that of the training data. NLU models are known to suffer in OOD settings (Utama et al., 2020b). We study this issue from the perspective of causality, which attributes the spurious correlations that models learn to confounding bias. While a common solution is to perform intervention, existing methods handle only a single, known confounder (Pearl and Mackenzie, 2018), whereas in many NLU tasks the confounders can be both unknown and multifactorial. In this paper, we propose a novel interventional training method called Bottom-up Automatic Intervention (BAI) that performs multi-granular intervention with identified multifactorial confounders. Our experiments on three NLU tasks, namely natural language inference, fact verification, and paraphrase identification, show the effectiveness of BAI in tackling different OOD settings.
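The causal intervention the abstract invokes is commonly realized via backdoor adjustment, which replaces the confounded estimate P(y | x) with P(y | do(x)) = Σ_z P(y | x, z) P(z), averaging the model's prediction over confounder strata z. The sketch below illustrates only this general idea, not the paper's BAI algorithm; the model interface, stratum embeddings, and priors are hypothetical assumptions for illustration.

```python
# Minimal sketch of backdoor adjustment for interventional prediction:
#   P(y | do(x)) = sum_z P(y | x, z) P(z)
# This is an illustrative assumption, NOT the paper's BAI method.
import torch

def interventional_log_probs(model, x, confounder_embs, priors):
    """Average class probabilities over identified confounder strata.

    model           -- hypothetical callable taking (x, z) and returning class logits
    x               -- input representations, shape (batch, d)
    confounder_embs -- one embedding per confounder stratum z, shape (K, d)
    priors          -- estimated P(z) for each stratum, shape (K,)
    """
    probs = 0.0
    for z, p_z in zip(confounder_embs, priors):
        # Broadcast the stratum embedding across the batch and
        # accumulate P(y | x, z) * P(z).
        z_batch = z.unsqueeze(0).expand(x.size(0), -1)
        probs = probs + p_z * torch.softmax(model(x, z_batch), dim=-1)
    # Log of the intervened class distribution P(y | do(x)).
    return torch.log(probs)
```

In this reading, BAI's contribution lies upstream of such an adjustment: identifying multiple, unknown confounder strata automatically and intervening on them at several granularities, rather than assuming a single confounder is given.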

Cite

APA: Yu, S., Jiang, J., Zhang, H., Niu, Y., Sun, Q., & Bing, L. (2022). Interventional Training for Out-Of-Distribution Natural Language Understanding. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022) (pp. 11627–11638). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.emnlp-main.799
