Towards Need-Based Spoken Language Understanding Model Updates: What Have We Learned?


Abstract

In productionized machine learning systems, online model performance is known to deteriorate over time when there is a distributional drift between the offline training data and the online application data. As a remedy, models are typically retrained at fixed time intervals, which incurs high computational and manual costs. This work aims to decrease such costs in productionized, large-scale Spoken Language Understanding systems. In particular, we develop a need-based retraining strategy guided by an efficient drift detector and discuss the challenges that arise, including system complexity, overlapping model releases, limited observations, and the absence of annotated resources at runtime. We present empirical results on historical data and confirm the utility of our design decisions via an online A/B experiment.
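The paper does not publish code, and its drift detector is only described at a high level. As a rough, hypothetical sketch of the general idea, the Python snippet below flags a model for retraining when the distribution of predicted intents on recent unlabeled traffic diverges from the distribution seen at training time, using Jensen-Shannon divergence. The function names, intent labels, and the 0.05 threshold are illustrative assumptions, not the authors' actual detector.

# Hypothetical need-based retraining trigger (not the paper's detector).
# Drift is approximated by the Jensen-Shannon divergence between the
# intent distribution at training time and the distribution of model
# predictions on recent, unlabeled online traffic.

from collections import Counter
from math import log2


def _normalize(counts: Counter) -> dict:
    """Turn raw counts into a discrete probability distribution."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}


def _js_divergence(p: dict, q: dict) -> float:
    """Jensen-Shannon divergence between two discrete distributions."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}

    def kl(a: dict) -> float:
        # KL(a || m); m[k] > 0 whenever a[k] > 0, so the ratio is safe.
        return sum(a.get(k, 0.0) * log2(a.get(k, 0.0) / m[k])
                   for k in keys if a.get(k, 0.0) > 0.0)

    return 0.5 * kl(p) + 0.5 * kl(q)


def should_retrain(train_intents, online_intents, threshold=0.05) -> bool:
    """Flag a model for retraining when the predicted-intent distribution
    on recent unlabeled traffic drifts away from the training distribution."""
    p = _normalize(Counter(train_intents))
    q = _normalize(Counter(online_intents))
    return _js_divergence(p, q) > threshold


# Example: a surge of a previously rare intent triggers retraining.
train = ["PlayMusic"] * 70 + ["GetWeather"] * 25 + ["SetTimer"] * 5
online = ["PlayMusic"] * 40 + ["GetWeather"] * 20 + ["SetTimer"] * 40
print(should_retrain(train, online))  # True under these illustrative numbers

A distribution-level check of this kind requires no annotated resources at runtime, which is consistent with the constraint the abstract mentions; the concrete statistic, observation window, and threshold would have to be tuned per system.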

Cite

APA

Do, Q., Gaspers, J., Sorokin, D., & Lehnen, P. (2022). Towards Need-Based Spoken Language Understanding Model Updates: What Have We Learned? In EMNLP 2022 - Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track (pp. 131–137). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-industry.11
