Semi-supervised bootstrapping of dialogue state trackers for task-oriented modelling

Citations: 6
Readers (Mendeley): 97

Abstract

Dialogue systems benefit greatly from optimizing on detailed annotations, such as transcribed utterances, internal dialogue state representations and dialogue act labels. However, collecting these annotations is expensive and time-consuming, holding back development in the area of dialogue modelling. In this paper, we investigate semi-supervised learning methods that are able to reduce the amount of required intermediate labelling. We find that by leveraging un-annotated data instead, the amount of turn-level annotations of dialogue state can be significantly reduced when building a neural dialogue system. Our analysis on the MultiWOZ corpus, covering a range of domains and topics, finds that annotations can be reduced by up to 30% while maintaining equivalent system performance. We also describe and evaluate the first end-to-end dialogue model created for the MultiWOZ corpus.
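
The abstract does not spell out the training procedure, so the following is only a hedged sketch of one common semi-supervised bootstrapping scheme (self-training with pseudo-labels) applied to a turn-level state classifier. The toy features, the scikit-learn model, and the confidence threshold are illustrative assumptions, not the authors' implementation.

# A minimal sketch of semi-supervised bootstrapping (self-training) for a
# slot-level dialogue state classifier. Generic pseudo-labelling loop for
# illustration only -- not the exact method evaluated in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: each row is a turn-level feature vector, each label a slot-value id.
X_labelled = rng.normal(size=(50, 16))
y_labelled = rng.integers(0, 3, size=50)
X_unlabelled = rng.normal(size=(500, 16))

model = LogisticRegression(max_iter=1000)

for _ in range(3):  # a few bootstrapping rounds
    model.fit(X_labelled, y_labelled)

    # Predict dialogue-state labels for the un-annotated turns.
    probs = model.predict_proba(X_unlabelled)
    confidence = probs.max(axis=1)
    pseudo_labels = probs.argmax(axis=1)

    # Keep only high-confidence pseudo-labels (threshold is an assumption).
    keep = confidence > 0.9
    if not keep.any():
        break

    # Fold the confidently pseudo-labelled turns into the training set.
    X_labelled = np.vstack([X_labelled, X_unlabelled[keep]])
    y_labelled = np.concatenate([y_labelled, pseudo_labels[keep]])
    X_unlabelled = X_unlabelled[~keep]

In the paper's setting, the classifier would correspond to the neural dialogue state tracker and the unlabelled pool to the un-annotated MultiWOZ dialogues; the abstract reports that such use of un-annotated data lets turn-level state annotations be cut by up to 30% without loss of system performance.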

Citation (APA)

Tseng, B. H., Rei, M., Budzianowski, P., Turner, R. E., Byrne, B., & Korhonen, A. (2019). Semi-supervised bootstrapping of dialogue state trackers for task-oriented modelling. In EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference (pp. 1273–1278). Association for Computational Linguistics. https://doi.org/10.18653/v1/d19-1125
