In this paper, we present a novel approach to natural language understanding that combines context-free grammars (CFGs) with sequence-to-sequence (seq2seq) deep learning. Specifically, we take a CFG authored to generate dialogue for our target application, a videogame, and train a long short-term memory (LSTM) recurrent neural network (RNN) to translate the surface utterances it produces into traces of the grammatical expansions that yielded them. Critically, the symbols in this grammar were already annotated with the semantic and pragmatic information that our game's dialogue manager operates over, so the grammatical trace associated with any surface utterance lets us infer that information. Preliminary offline evaluation shows that our RNN translates utterances into grammatical traces (and thereby meaning representations) with high accuracy.
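To make the core idea concrete, the sketch below pairs surface utterances with the expansion traces that produced them, using a tiny hypothetical grammar (not the authors' actual game grammar). Such (utterance, trace) pairs are the kind of parallel data on which a seq2seq LSTM could be trained to map utterances back to traces.

```python
import random

# Toy CFG, for illustration only: each nonterminal (uppercase) maps to a
# list of productions; a production is a list of symbols. Lowercase
# strings are terminals (surface words).
GRAMMAR = {
    "GREETING": [["HELLO", "NAME"], ["HELLO"]],
    "HELLO": [["hi"], ["hello"], ["hey"]],
    "NAME": [["there"], ["friend"]],
}

def expand(symbol, rng):
    """Expand `symbol`, returning (tokens, trace).

    `trace` records each expansion as a (nonterminal, production-index)
    pair in depth-first order -- the target sequence a seq2seq model
    would learn to emit for the corresponding surface utterance."""
    if symbol not in GRAMMAR:  # terminal symbol: emit it directly
        return [symbol], []
    idx = rng.randrange(len(GRAMMAR[symbol]))
    tokens, trace = [], [(symbol, idx)]
    for child in GRAMMAR[symbol][idx]:
        child_tokens, child_trace = expand(child, rng)
        tokens += child_tokens
        trace += child_trace
    return tokens, trace

def sample_pair(rng, start="GREETING"):
    """Draw one (utterance, trace) training pair from the grammar."""
    tokens, trace = expand(start, rng)
    return " ".join(tokens), trace
```

Because the trace identifies every production applied, any semantic or pragmatic annotations attached to the grammar's symbols can be recovered from it directly, which is what makes the trace usable as a meaning representation.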
Ryan, J., Summerville, A. J., Mateas, M., & Wardrip-Fruin, N. (2016). Translating player dialogue into meaning representations using LSTMs. In Lecture Notes in Computer Science (Vol. 10011 LNAI, pp. 383–386). Springer. https://doi.org/10.1007/978-3-319-47665-0_38