Distributionally Robust Finetuning BERT for Covariate Drift in Spoken Language Understanding

6 citations · 35 readers on Mendeley

Abstract

In this study, we investigate robustness against covariate drift in spoken language understanding (SLU). Covariate drift can occur in SLU when the distribution of what users request, or how they request it, shifts between training and testing. To study this, we propose a method that exploits natural variations in data to create covariate drift in SLU datasets. Experiments show that a state-of-the-art BERT-based model suffers a performance loss under this drift. To mitigate the loss, we investigate distributionally robust optimization (DRO) for finetuning BERT-based models. We discuss several recent DRO methods, propose two new variants, and empirically show that DRO improves robustness under drift.
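
For readers unfamiliar with DRO, the sketch below shows one common instantiation, a worst-group (group DRO) objective, in PyTorch. It is not the authors' finetuning procedure or either of their proposed variants; the group labels, tensor shapes, and the `worst_group_loss` helper are hypothetical, standing in for subpopulations of an SLU dataset (e.g., different ways of phrasing a request).

```python
# Minimal group-DRO-style sketch: minimize the worst per-group loss rather
# than the average loss. Assumes each example carries a hypothetical group id.
import torch
import torch.nn.functional as F


def worst_group_loss(logits: torch.Tensor,
                     targets: torch.Tensor,
                     group_ids: torch.Tensor) -> torch.Tensor:
    """Return the largest mean cross-entropy loss over the groups in the batch."""
    per_example = F.cross_entropy(logits, targets, reduction="none")
    group_losses = []
    for g in torch.unique(group_ids):
        mask = group_ids == g
        group_losses.append(per_example[mask].mean())
    # Minimizing the max over group losses guards against any mixture of the
    # group distributions, which is the basic DRO motivation.
    return torch.stack(group_losses).max()


if __name__ == "__main__":
    # Toy stand-ins for BERT intent-classification outputs: batch of 8, 5 intents.
    logits = torch.randn(8, 5, requires_grad=True)
    targets = torch.randint(0, 5, (8,))
    group_ids = torch.tensor([0, 0, 0, 1, 1, 1, 2, 2])  # hypothetical groups
    loss = worst_group_loss(logits, targets, group_ids)
    loss.backward()  # gradients flow only through the worst group's examples
    print(f"worst-group loss: {loss.item():.4f}")
```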

Citation (APA)

Broscheit, S., Do, Q., & Gaspers, J. (2022). Distributionally Robust Finetuning BERT for Covariate Drift in Spoken Language Understanding. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 1970–1985). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.139
