End-to-End Self-Debiasing Framework for Robust NLU Training


Abstract

Existing Natural Language Understanding (NLU) models have been shown to incorporate dataset biases, leading to strong performance on in-distribution (ID) test sets but poor performance on out-of-distribution (OOD) ones. We introduce a simple yet effective debiasing framework in which the shallow representations of the main model are used to derive a bias model, and both models are trained simultaneously. We demonstrate on three well-studied NLU tasks that, despite its simplicity, our method yields competitive OOD results. It significantly outperforms other debiasing approaches on two tasks, while still delivering high in-distribution performance.
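The abstract describes deriving a bias model from the main model's shallow representations and training both jointly with a debiased objective. A common way to combine a main and a bias classifier in this line of work is a product-of-experts loss, where the two models' log-probabilities are summed before computing the negative log-likelihood. The sketch below is a minimal, hedged illustration of that combination step only; the function names and the choice of product-of-experts are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def poe_loss(main_logits, bias_logits, labels):
    """Product-of-experts debiasing loss (illustrative sketch).

    Summing log-probabilities of the main and bias classifiers and
    renormalizing means the main model receives a smaller gradient on
    examples the bias model already classifies confidently, pushing it
    to rely on non-spurious features.
    """
    combined = log_softmax(log_softmax(main_logits) + log_softmax(bias_logits))
    # Negative log-likelihood of the gold labels under the combined distribution.
    return -combined[np.arange(len(labels)), labels].mean()
```

In an end-to-end setup like the one described, `bias_logits` would come from a classifier head attached to the model's lower layers, trained in the same forward/backward pass as the main head rather than in a separate pipeline stage.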

Citation (APA)

Ghaddar, A., Langlais, P., Rezagholizadeh, M., & Rashid, A. (2021). End-to-End Self-Debiasing Framework for Robust NLU Training. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 1923–1929). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.168
