Abstract
Intent detection and slot filling are the two main tasks in building a spoken language understanding (SLU) system. Multiple deep-learning-based models have demonstrated good results on these tasks. The most effective algorithms are based on sequence-to-sequence (or "encoder-decoder") model structures, and generate the intents and semantic tags either with separate models (Yao et al., 2014; Mesnil et al., 2015; Peng and Yao, 2015; Kurata et al., 2016; Hahn et al., 2011) or with a joint model (Liu and Lane, 2016a; Hakkani-Tür et al., 2016; Guo et al., 2014). Most previous studies, however, either treat intent detection and slot filling as two separate parallel tasks, or use a single sequence-to-sequence model to generate both semantic tags and intents. Because these approaches model both tasks with one (joint) neural-network-based model (including encoder-decoder structures), they may not fully exploit the cross-impact between the two tasks. In this paper, new bi-model based RNN semantic frame parsing network structures are designed to perform intent detection and slot filling jointly, capturing their cross-impact on each other through two correlated bidirectional LSTMs (BLSTMs). Our bi-model structure with a decoder achieves state-of-the-art results on the benchmark ATIS data (Hemphill et al., 1990; Tur et al., 2010), with about 0.5% improvement in intent accuracy and 0.9% improvement in slot filling.
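The cross-impact idea described above can be sketched in a few lines: two task networks process the same utterance, and at each time step each network's recurrent cell also reads the *other* network's previous hidden state. The sketch below is a deliberate simplification, using plain tanh RNN cells and random untrained weights instead of the paper's BLSTMs; all dimensions, weight names, and the `step` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d_in, d_h = 5, 8, 16           # sequence length, input dim, hidden dim
n_slots, n_intents = 10, 4        # illustrative label-set sizes

# One set of weights per task network; each cell reads its own previous
# hidden state concatenated with the other network's (the cross-impact link).
def make_cell():
    return {
        "Wx": rng.normal(scale=0.1, size=(d_h, d_in)),
        "Wh": rng.normal(scale=0.1, size=(d_h, 2 * d_h)),  # own + other hidden
        "b": np.zeros(d_h),
    }

intent_cell, slot_cell = make_cell(), make_cell()
W_intent = rng.normal(scale=0.1, size=(n_intents, d_h))  # utterance-level head
W_slot = rng.normal(scale=0.1, size=(n_slots, d_h))      # per-token head

def step(cell, x_t, h_own, h_other):
    """One simplified recurrent step with the cross-network connection."""
    h_cat = np.concatenate([h_own, h_other])
    return np.tanh(cell["Wx"] @ x_t + cell["Wh"] @ h_cat + cell["b"])

x = rng.normal(size=(T, d_in))    # toy stand-in for word embeddings
h_i = np.zeros(d_h)               # intent-network hidden state
h_s = np.zeros(d_h)               # slot-network hidden state
slot_logits = []

for t in range(T):
    h_i_new = step(intent_cell, x[t], h_i, h_s)  # intent net reads slot hidden
    h_s_new = step(slot_cell, x[t], h_s, h_i)    # slot net reads intent hidden
    h_i, h_s = h_i_new, h_s_new
    slot_logits.append(W_slot @ h_s)             # one slot score vector per token

intent_logits = W_intent @ h_i                   # one intent score per utterance

print(intent_logits.shape, len(slot_logits), slot_logits[0].shape)
```

Each task keeps its own parameters (unlike a single joint encoder), yet neither runs in isolation: the slot tagger sees the intent network's state at every step and vice versa, which is the structural point the abstract makes.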
Citation
Wang, Y., Shen, Y., & Jin, H. (2018). A bi-model based RNN semantic frame parsing model for intent detection and slot filling. In NAACL HLT 2018 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference (Vol. 2, pp. 309–314). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n18-2050