Abstract
Recent task-oriented dialogue systems are trained on annotated dialogues, which in turn reflect specific domain knowledge (e.g., the restaurants or hotels in a given region). However, when such domain knowledge changes (e.g., new restaurants open), the initial dialogue model may become obsolete, decreasing the overall performance of the system. Through a number of experiments, we show, for instance, that adding 50% new slot values reduces dialogue state-tracker performance by about 55%. In light of such evidence, we suggest that automatic adaptation of training dialogues is a valuable option for re-training obsolete models. We experimented with a dialogue adaptation approach based on fine-tuning a generative language model on domain changes, showing that it substantially mitigates the performance drop.
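To make the adaptation setting concrete, the sketch below builds source/target rewriting pairs from annotated dialogue turns after a hypothetical slot-value change in the domain ontology. The dialogues, the value mapping, and the "adapt dialogue:" task prefix are invented for illustration; pairs of this kind could then be fed to a standard sequence-to-sequence fine-tuning pipeline for a generative language model, in the spirit of the approach mentioned above. Whether the paper constructs its training data this way is an assumption, not a claim about the authors' method.

# Hypothetical sketch: prepare (source, target) pairs for fine-tuning a
# generative model that rewrites training dialogues after a domain change.
# Slot values, dialogues, and the task prefix are illustrative only.

from typing import Dict, List

# Old-to-new slot-value mapping capturing the domain change
# (e.g., a restaurant that closed replaced by a newly opened one).
VALUE_CHANGES: Dict[str, str] = {
    "Pizza Hut Cherry Hinton": "Luigi's Trattoria",
    "cheap": "moderate",
}

def adapt_utterance(utterance: str, changes: Dict[str, str]) -> str:
    """Naive surface substitution, used here only to create silver targets."""
    for old, new in changes.items():
        utterance = utterance.replace(old, new)
    return utterance

def build_finetuning_pairs(dialogues: List[List[str]],
                           changes: Dict[str, str]) -> List[Dict[str, str]]:
    """Turn each annotated dialogue turn into a source/target rewriting pair."""
    pairs = []
    for dialogue in dialogues:
        for turn in dialogue:
            pairs.append({
                "source": "adapt dialogue: " + turn,   # task prefix for a seq2seq LM
                "target": adapt_utterance(turn, changes),
            })
    return pairs

if __name__ == "__main__":
    dialogues = [[
        "I am looking for a cheap restaurant, maybe Pizza Hut Cherry Hinton.",
        "Pizza Hut Cherry Hinton is a cheap restaurant in the south.",
    ]]
    for pair in build_finetuning_pairs(dialogues, VALUE_CHANGES):
        print(pair["source"], "->", pair["target"])
    # The resulting pairs could then be used to fine-tune an off-the-shelf
    # generative (seq2seq) model, which would afterwards rewrite unseen
    # obsolete dialogues to match the updated domain.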
Citation
Labruna, T., & Magnini, B. (2023). Addressing Domain Changes in Task-oriented Conversational Agents through Dialogue Adaptation. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Student Research Workshop (pp. 149–158). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.eacl-srw.16