Regression-Free Model Updates for Spoken Language Understanding

Abstract

In real-world systems, an important requirement for model updates is to avoid regressions in user experience caused by flips of previously correct classifications to incorrect ones. Multiple techniques for this have been proposed in the recent literature. In this paper, we apply one such technique, focal distillation, to model updates in a goal-oriented dialog system and assess its usefulness in practice. In particular, we evaluate its effectiveness for key language understanding tasks, including sentence classification and sequence labeling tasks, we further assess its effect when applied to repeated model updates over time, and test its compatibility with mislabeled data. Our experiments on a public benchmark and data from a deployed dialog system demonstrate that focal distillation can substantially reduce regressions, at only minor drops in accuracy, and that it further outperforms naive supervised training in challenging mislabeled data and label expansion settings.
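To make the core idea concrete: focal distillation augments standard supervised training with a distillation term toward the old model that is up-weighted on samples the old model classified correctly, discouraging flips on those samples. The paper's exact formulation is not reproduced here; the following is a minimal NumPy sketch of such a loss, where the function name, the hyperparameters `alpha` and `beta`, and the use of KL divergence as the distillation term are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def focal_distillation_loss(new_logits, old_logits, labels, alpha=1.0, beta=5.0):
    """Cross-entropy on the gold labels plus a distillation term toward the
    old model, with extra weight (beta) on samples the old model got right.
    Hypothetical sketch; hyperparameters and exact form are assumptions."""
    n = len(labels)
    p_new = softmax(new_logits)
    p_old = softmax(old_logits)
    # standard cross-entropy with the gold labels
    ce = -np.log(p_new[np.arange(n), labels] + 1e-12)
    # per-sample KL(old || new) distillation term
    kl = np.sum(p_old * (np.log(p_old + 1e-12) - np.log(p_new + 1e-12)), axis=-1)
    # focal weight: base weight alpha everywhere, plus beta where the old
    # model's prediction matched the gold label
    old_correct = (old_logits.argmax(axis=-1) == labels)
    w = alpha + beta * old_correct.astype(float)
    return float(np.mean(ce + w * kl))
```

Because the distillation term is weighted most heavily exactly where the old model was correct, the new model is pulled to reproduce those decisions, which is what reduces negative flips at only a minor cost in overall accuracy.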

Cite (APA)

Caciolai, A., Weber, V., Falke, T., Pedrani, A., & Bernardi, D. (2023). Regression-Free Model Updates for Spoken Language Understanding. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 5, pp. 538–551). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-industry.52
