An Effective Non-Autoregressive Model for Spoken Language Understanding

Abstract

Spoken Language Understanding (SLU), a core component of task-oriented dialogue systems, requires low inference latency because human users are impatient. Non-autoregressive SLU models clearly increase inference speed, but they suffer from the uncoordinated-slot problem caused by the lack of sequential dependency information among slot chunks. To bridge this gap, in this paper we propose a novel non-autoregressive SLU model named Layered-Refine Transformer, which contains a Slot Label Generation (SLG) task and a Layered Refine Mechanism (LRM). SLG is defined as generating the next slot label from the token sequence and the slot labels generated so far. With SLG, the non-autoregressive model can efficiently obtain dependency information during training while spending no extra time at inference. LRM predicts preliminary SLU results from the Transformer's middle states and uses them to guide the final prediction. Experiments on two public datasets show that our model significantly improves SLU performance (by 1.5% in overall accuracy) while substantially speeding up inference (by more than 10 times) over the state-of-the-art baseline.
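As a rough illustration of the Layered Refine Mechanism described in the abstract, the minimal PyTorch sketch below predicts preliminary slot labels from an intermediate Transformer layer and feeds them back as guidance for the final prediction. This is not the authors' implementation: all module names, dimensions, and the additive fusion of the guidance embeddings are assumptions made for illustration, and the SLG training task is omitted.

```python
# Illustrative sketch only: module names, sizes, and the fusion strategy are
# assumptions, not the paper's published code. It shows the LRM idea of
# predicting preliminary slots from middle Transformer states and using them
# to guide the final prediction.
import torch
import torch.nn as nn


class LayeredRefineSketch(nn.Module):
    def __init__(self, vocab_size=1000, num_slots=20, d_model=64,
                 nhead=4, lower_layers=2, upper_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.lower = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers=lower_layers)
        self.upper = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers=upper_layers)
        # Preliminary slot classifier applied to the middle states.
        self.mid_slot_head = nn.Linear(d_model, num_slots)
        # Embed the preliminary labels so they can guide the upper layers.
        self.slot_embed = nn.Embedding(num_slots, d_model)
        self.final_slot_head = nn.Linear(d_model, num_slots)

    def forward(self, token_ids):
        h = self.lower(self.embed(token_ids))            # middle states
        mid_logits = self.mid_slot_head(h)               # preliminary slot prediction
        guide = self.slot_embed(mid_logits.argmax(-1))   # feed predictions back as guidance
        h = self.upper(h + guide)                        # refined upper layers
        final_logits = self.final_slot_head(h)
        # Both heads can be supervised during training; only final_logits
        # is needed at inference, so decoding stays non-autoregressive.
        return mid_logits, final_logits


tokens = torch.randint(0, 1000, (2, 12))                 # 2 utterances, 12 tokens each
mid, final = LayeredRefineSketch()(tokens)
print(mid.shape, final.shape)                            # both: torch.Size([2, 12, 20])
```

Because both the preliminary and final heads label every token in parallel, the refinement adds only one extra classification pass rather than a sequential decoding loop, which is consistent with the non-autoregressive speedup the abstract reports.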

Citation (APA)

Cheng, L., Jia, W., & Yang, W. (2021). An Effective Non-Autoregressive Model for Spoken Language Understanding. In International Conference on Information and Knowledge Management, Proceedings (pp. 241–250). Association for Computing Machinery. https://doi.org/10.1145/3459637.3482229
