Dual Learning for Dialogue State Tracking


Abstract

In task-oriented multi-turn dialogue systems, the dialogue state is a compact representation of the user goal in the context of the dialogue history. Dialogue state tracking (DST) aims to estimate the dialogue state at each turn. Because DST depends on complicated dialogue-history contexts, its data annotation is more expensive than that of single-sentence language understanding, which makes the task more challenging. In this work, we formulate DST as a sequence generation problem and propose a novel dual-learning framework to make full use of unlabeled data. The framework contains two agents: the primal tracker agent (an utterance-to-state generator) and the dual utterance generator agent (a state-to-utterance generator). In contrast to the traditional supervised learning framework, dual learning can iteratively update both agents through the reconstruction error and the reward signal, respectively, without labeled data. The reward sparsity problem is hard to solve in previous DST methods; reformulating DST as a sequence generation model effectively alleviates it. We call this primal tracker agent dual-DST. Experimental results on the MultiWOZ2.1 dataset show that the proposed dual-DST works well, especially when labeled data is limited, achieving performance comparable to a system where labeled data is fully used.
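The dual-learning loop described above can be sketched as follows. This is a minimal toy illustration, not the paper's method: the rule-based `primal_tracker`, template-based `dual_generator`, and token-overlap `reconstruction_reward` are all hypothetical stand-ins for the neural sequence generators and reward signal the paper trains.

```python
# Toy sketch of one dual-learning round (hypothetical rule-based agents;
# the paper's agents are learned utterance-to-state and state-to-utterance
# sequence generators).

def primal_tracker(utterance):
    # Primal agent: utterance -> dialogue state (keyword rules for illustration).
    state = {}
    if "cheap" in utterance:
        state["hotel-pricerange"] = "cheap"
    if "north" in utterance:
        state["hotel-area"] = "north"
    return state

def dual_generator(state):
    # Dual agent: dialogue state -> utterance (simple template for illustration).
    values = list(state.values())
    if not values:
        return "i need a hotel"
    return "i need a " + " ".join(values) + " hotel"

def reconstruction_reward(original, reconstructed):
    # Token-overlap score as a stand-in for the reconstruction-based reward
    # that updates both agents without labeled states.
    a, b = set(original.split()), set(reconstructed.split())
    return len(a & b) / max(len(a | b), 1)

utterance = "i need a cheap hotel in the north"
state = primal_tracker(utterance)              # primal: utterance -> state
reconstructed = dual_generator(state)          # dual: state -> utterance
reward = reconstruction_reward(utterance, reconstructed)
```

In the actual framework, this reward (together with the reconstruction error) would drive gradient updates of both generators on unlabeled dialogues; here it only scores one round-trip.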

Citation (APA)

Chen, Z., Chen, L., Zhao, Y., Zhu, S., & Yu, K. (2023). Dual Learning for Dialogue State Tracking. In Communications in Computer and Information Science (Vol. 1765 CCIS, pp. 293–305). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-99-2401-1_26
