Spoken Language Understanding (SLU) models in industry applications are usually trained offline on historical data, but have to perform well on incoming user requests after deployment. Since the application data is not available at training time, this setting is formally similar to the domain generalization problem, where domains correspond to different temporal segments of the data and the goal is to build a model that performs well on unseen domains, e.g., upcoming data. In this paper, we explore different strategies for achieving good temporal generalization, including instance weighting, temporal fine-tuning, learning temporal features, and building a temporally-invariant model. Our results on data from large-scale SLU systems show that temporal information can be leveraged to improve temporal generalization for SLU models.
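The abstract only names the strategies; as a concrete illustration, here is a minimal sketch of one plausible realization of instance weighting, assuming an exponential-decay scheme over temporal training segments. The `decay` hyperparameter and the segmentation into monthly buckets are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def recency_weights(segment_ages, decay=0.5):
    """Exponential-decay instance weights: training examples from newer
    temporal segments (smaller age) receive larger weight.

    `decay` is a hypothetical hyperparameter; the paper's abstract names
    instance weighting as a strategy but does not fix a particular scheme.
    """
    w = np.exp(-decay * np.asarray(segment_ages, dtype=float))
    return w / w.sum()  # normalize so the weights sum to 1

# Example: five monthly training segments, where age 0 is the most recent.
ages = [4, 3, 2, 1, 0]
print(recency_weights(ages))  # the most recent segment gets the largest weight
```

In training, such weights would typically multiply each example's loss term, biasing the model toward recent data while still exploiting the full history.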
Gaspers, J., Kumar, A., Ver Steeg, G., & Galstyan, A. (2022). Temporal Generalization for Spoken Language Understanding. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track (pp. 37–44). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.naacl-industry.5