Learning Dialogue History for Spoken Language Understanding

Abstract

In task-oriented dialogue systems, spoken language understanding (SLU) aims to convert users' natural-language queries into structured representations. SLU usually consists of two subtasks: intent identification and slot filling. Although many methods have been proposed for SLU, most process each utterance in isolation, discarding the contextual information available in a dialogue. In this paper, we propose a hierarchical LSTM-based model for SLU. A turn-level LSTM memorizes the dialogue history, which is then used to assist the prediction of intents and slot tags. Consequently, understanding of the current turn depends on the preceding turns. We conduct experiments on the NLPCC 2018 Shared Task 4 dataset. The results demonstrate that dialogue history is effective for SLU and that our model outperforms all baselines.
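To make the hierarchical idea concrete, here is a minimal PyTorch sketch of the architecture the abstract describes: an utterance-level LSTM encodes the tokens of each turn, a turn-level LSTM carries a summary of the dialogue history across turns, and both intents and slot tags are predicted from history-aware states. The layer sizes, the bidirectional utterance encoder, the mean-pooled turn summary, and the concatenation-based conditioning of the slot tagger are all illustrative assumptions, not the authors' published configuration.

```python
# Sketch of a hierarchical-LSTM SLU model (details assumed, see note above).
import torch
import torch.nn as nn

class HierarchicalSLU(nn.Module):
    def __init__(self, vocab_size, n_intents, n_slot_tags,
                 emb_dim=100, utt_hidden=128, turn_hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Utterance-level encoder: reads the tokens of one turn.
        self.utt_lstm = nn.LSTM(emb_dim, utt_hidden, batch_first=True,
                                bidirectional=True)
        # Turn-level LSTM: consumes one summary vector per turn and
        # memorizes the dialogue history across turns.
        self.turn_lstm = nn.LSTM(2 * utt_hidden, turn_hidden, batch_first=True)
        # Intent is predicted from the history-aware turn state.
        self.intent_head = nn.Linear(turn_hidden, n_intents)
        # Slot tags are predicted per token; the history state is injected
        # by concatenation (one plausible conditioning scheme).
        self.slot_head = nn.Linear(2 * utt_hidden + turn_hidden, n_slot_tags)

    def forward(self, dialogue):
        # dialogue: list of LongTensors, one (1, seq_len) tensor per turn.
        history = None                      # turn-level LSTM hidden state
        intents, slot_tags = [], []
        for turn in dialogue:
            tok_states, _ = self.utt_lstm(self.embed(turn))     # (1, T, 2H)
            utt_summary = tok_states.mean(dim=1, keepdim=True)  # (1, 1, 2H)
            # Advance the history over this turn's summary.
            turn_state, history = self.turn_lstm(utt_summary, history)
            intents.append(self.intent_head(turn_state.squeeze(1)))
            # Broadcast the turn state across tokens for slot tagging.
            ctx = turn_state.expand(-1, tok_states.size(1), -1)
            slot_tags.append(self.slot_head(torch.cat([tok_states, ctx], -1)))
        return intents, slot_tags

# Usage: feed a dialogue turn by turn; history is carried automatically.
model = HierarchicalSLU(vocab_size=5000, n_intents=10, n_slot_tags=20)
dialogue = [torch.randint(1, 5000, (1, 6)), torch.randint(1, 5000, (1, 8))]
intents, slots = model(dialogue)  # one intent logit vector and one
                                  # slot-tag logit sequence per turn
```

Because the turn-level hidden state threads through the loop, the predictions for each turn are conditioned on everything said before it, which is exactly the dependence on preceding turns that the abstract claims.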

Citation (APA)

Zhang, X., Ma, D., & Wang, H. (2018). Learning Dialogue History for Spoken Language Understanding. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11108 LNAI, pp. 120–132). Springer Verlag. https://doi.org/10.1007/978-3-319-99495-6_11
