Multi-task learning of system dialogue act selection for supervised pretraining of goal-oriented dialogue policies

Abstract

This paper describes the use of Multi-Task Neural Networks (NNs) for system dialogue act selection. These models leverage the representations learned by the Natural Language Understanding (NLU) unit to enable robust initialization/bootstrapping of dialogue policies from medium-sized initial data sets. We evaluate the models on two goal-oriented dialogue corpora in the travel booking domain. Results show that the proposed models improve over models trained without knowledge of the NLU tasks.
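The multi-task setup the abstract describes can be sketched as a shared encoder whose representation feeds both an NLU head and a policy head for system dialogue act selection. The following is a minimal illustrative sketch, not the authors' architecture; all layer sizes, task names, and the forward-only NumPy formulation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MultiTaskNet:
    """Shared encoder with two task heads (hypothetical layer sizes).

    The shared hidden representation is trained jointly on an NLU task
    (e.g. user intent classification) and on system dialogue act
    selection, so the policy head benefits from NLU supervision.
    """

    def __init__(self, in_dim, hidden, n_nlu_labels, n_system_acts):
        self.W_shared = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.W_nlu = rng.normal(0.0, 0.1, (hidden, n_nlu_labels))
        self.W_policy = rng.normal(0.0, 0.1, (hidden, n_system_acts))

    def forward(self, x):
        h = relu(x @ self.W_shared)          # shared representation
        p_nlu = softmax(h @ self.W_nlu)      # NLU head
        p_act = softmax(h @ self.W_policy)   # dialogue act head
        return p_nlu, p_act

net = MultiTaskNet(in_dim=16, hidden=32, n_nlu_labels=5, n_system_acts=8)
x = rng.normal(size=(4, 16))                 # batch of 4 turn encodings
p_nlu, p_act = net.forward(x)
print(p_nlu.shape, p_act.shape)
```

In pretraining, the joint loss would typically be a weighted sum of the two heads' cross-entropies so that gradients from the NLU task shape the shared representation used by the policy.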

Citation (APA)
McLeod, S., Kruijff-Korbayová, I., & Kiefer, B. (2019). Multi-task learning of system dialogue act selection for supervised pretraining of goal-oriented dialogue policies. In SIGDIAL 2019 - 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue - Proceedings of the Conference (pp. 411–417). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-5947
