Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System

Abstract

Pre-trained language models have recently been shown to benefit task-oriented dialogue (TOD) systems. Despite their success, existing methods often formulate this task as a cascaded generation problem, which can lead to error accumulation across sub-tasks and greater data annotation overhead. In this study, we present PPTOD, a unified plug-and-play model for task-oriented dialogue. In addition, we introduce a new dialogue multi-task pre-training strategy that allows the model to learn the primary TOD task completion skills from heterogeneous dialogue corpora. We extensively test our model on three benchmark TOD tasks: end-to-end dialogue modelling, dialogue state tracking, and intent classification. Experimental results show that PPTOD achieves a new state of the art on all evaluated tasks in both high-resource and low-resource scenarios. Furthermore, comparisons against previous SOTA methods show that the responses generated by PPTOD are more factually correct and semantically coherent as judged by human annotators.
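
To illustrate the plug-and-play formulation described above, the minimal sketch below shows how a single text-to-text model can serve several TOD sub-tasks by prepending a task-specific prompt to the dialogue context, so that sub-tasks are decoded independently rather than cascaded. This is not the authors' released code: the prompt wording, the example dialogue, and the generic "t5-small" checkpoint are assumptions for illustration only; the actual PPTOD prompts and checkpoints may differ.

```python
# Illustrative sketch of prompt-based, plug-and-play multi-task decoding.
# Prompt strings and the "t5-small" checkpoint are hypothetical placeholders.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

dialogue_context = "[user] i need a cheap restaurant in the centre of town"

# Each sub-task gets its own natural-language prompt; the same model and
# dialogue context are reused, so the sub-tasks do not feed into one another.
task_prompts = {
    "dialogue state tracking": "translate dialogue to belief state:",
    "intent classification":   "translate dialogue to user intent:",
    "response generation":     "translate dialogue to system response:",
}

for task, prompt in task_prompts.items():
    inputs = tokenizer(f"{prompt} {dialogue_context}", return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=64)
    print(task, "->", tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because every sub-task is cast as text generation from the same shared model, new sub-tasks can in principle be added by defining another prompt, which is what makes the approach "plug-and-play".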

Citation (APA)

Su, Y., Shu, L., Mansimov, E., Gupta, A., Cai, D., Lai, Y. A., & Zhang, Y. (2022). Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 4661–4676). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.319
