Simple, fast, accurate intent classification and slot labeling for goal-oriented dialogue systems

Abstract

With the advent of conversational assistants such as Amazon Alexa and Google Now, dialogue systems are gaining significant traction, especially in industrial settings. These systems typically include a Spoken Language Understanding component that performs two tasks: Intent Classification (IC) and Slot Labeling (SL). Generally, the two tasks are modeled jointly to achieve the best performance; however, joint modeling adds to model obfuscation. In this work, we first design a framework for modularizing the joint IC-SL task to enhance architecture transparency. Then, we explore a number of self-attention, convolutional, and recurrent models, contributing a large-scale analysis of modeling paradigms for IC+SL across two datasets. Finally, using this framework, we propose a class of ‘label-recurrent’ models that are non-recurrent apart from a 10-dimensional representation of the label history. We show that our proposed systems are highly accurate (achieving over 30% error reduction in SL over the state of the art on the Snips dataset) as well as fast, at 2x the inference speed and 2/3 to 1/2 the training time of comparable recurrent models, giving them an edge in latency-critical real-world systems.
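To make the ‘label-recurrent’ idea concrete, the sketch below is a minimal, hedged PyTorch illustration rather than the authors' implementation: token representations come from a non-recurrent encoder (a 1-D convolution here, one of the paradigms the paper explores), and the only state carried across time steps is a small 10-dimensional embedding of the previously predicted slot label. All layer sizes, class names, and hyperparameters are hypothetical.

```python
# Minimal sketch of a label-recurrent slot labeler (assumed architecture, not the paper's code).
import torch
import torch.nn as nn

class LabelRecurrentTagger(nn.Module):
    def __init__(self, vocab_size, num_labels, emb_dim=100, hid_dim=128, label_dim=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Non-recurrent token encoder (convolutional here); runs over the whole sentence in parallel.
        self.encoder = nn.Conv1d(emb_dim, hid_dim, kernel_size=3, padding=1)
        # Tiny embedding of the previous label: the only recurrent state (10-dimensional).
        self.label_embed = nn.Embedding(num_labels + 1, label_dim)  # +1 for a "start" label
        self.classifier = nn.Linear(hid_dim + label_dim, num_labels)
        self.start_label = num_labels

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer token ids
        h = self.encoder(self.embed(tokens).transpose(1, 2)).transpose(1, 2)  # (B, T, hid_dim)
        prev = torch.full((tokens.size(0),), self.start_label, dtype=torch.long)
        outputs = []
        for t in range(tokens.size(1)):
            # Condition each slot prediction on the encoder state plus the previous label's embedding.
            feat = torch.cat([h[:, t], self.label_embed(prev)], dim=-1)
            logits = self.classifier(feat)
            outputs.append(logits)
            prev = logits.argmax(dim=-1)  # greedy label feedback
        return torch.stack(outputs, dim=1)  # (B, T, num_labels)

# Toy usage with hypothetical sizes: 2 sentences of 7 tokens each.
model = LabelRecurrentTagger(vocab_size=1000, num_labels=20)
slot_logits = model(torch.randint(0, 1000, (2, 7)))
```

Because the per-step recurrence touches only the small label embedding rather than a full hidden state, the heavy token encoder can run fully in parallel, which is the intuition behind the reported speed advantage over comparable recurrent models.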

Citation (APA)

Gupta, A., Hewitt, J., & Kirchhoff, K. (2019). Simple, fast, accurate intent classification and slot labeling for goal-oriented dialogue systems. In SIGDIAL 2019 - Proceedings of the 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue (pp. 46–55). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-5906
