Abstract
Recently, a variety of LSTM-based conditional language models (LMs) have been applied across a range of language generation tasks. In this work we study various model architectures and different ways to represent and aggregate the source information in an end-to-end neural dialogue system framework. We also propose a method called snapshot learning, which facilitates learning from supervised sequential signals by applying a companion cross-entropy objective function to the conditioning vector. The experimental and analytical results demonstrate, firstly, that competition occurs between the conditioning vector and the LM, and that the differing architectures provide different trade-offs between the two; secondly, that the discriminative power and transparency of the conditioning vector are key to providing both model interpretability and better performance; and thirdly, that snapshot learning leads to consistent performance improvements, independent of which architecture is used.
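To make the idea concrete, the following is a minimal sketch (in PyTorch; not the authors' code) of a conditional LSTM generator with a companion cross-entropy objective applied to the conditioning vector. The dialogue-act-style conditioning vector, the per-step sigmoid cross-entropy form of the companion term, and all module names, shapes, and the weighting scheme are assumptions for illustration only.

```python
# Illustrative sketch only: an LSTM decoder conditioned on a dialogue-act
# vector, trained with the usual word-level cross-entropy plus a companion
# ("snapshot") cross-entropy on a per-step readout of the conditioning
# information. Names and shapes are hypothetical.
import torch
import torch.nn as nn


class ConditionalLSTMDecoder(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, da_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTMCell(embed_dim + da_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)
        # Predicts the conditioning (dialogue-act) features from the hidden
        # state at every step; this readout is what the companion loss trains.
        self.da_readout = nn.Linear(hidden_dim, da_dim)

    def forward(self, tokens, da_vector):
        """tokens: (T, B) gold word ids; da_vector: (B, da_dim) conditioning vector."""
        T, B = tokens.shape
        h = tokens.new_zeros(B, self.lstm.hidden_size, dtype=torch.float)
        c = torch.zeros_like(h)
        word_logits, da_logits = [], []
        for t in range(T):
            x = torch.cat([self.embed(tokens[t]), da_vector], dim=-1)
            h, c = self.lstm(x, (h, c))
            word_logits.append(self.out(h))
            da_logits.append(self.da_readout(h))  # per-step "snapshot" readout
        return torch.stack(word_logits), torch.stack(da_logits)


def snapshot_loss(word_logits, da_logits, targets, da_snapshots, alpha=0.5):
    """Word-level cross-entropy plus a companion cross-entropy on per-step
    binary snapshot targets; alpha trades off the two objectives."""
    lm = nn.functional.cross_entropy(
        word_logits.reshape(-1, word_logits.size(-1)), targets.reshape(-1))
    companion = nn.functional.binary_cross_entropy_with_logits(
        da_logits, da_snapshots)
    return lm + alpha * companion


# Toy usage with random data (illustrative shapes only; real targets would be
# the input tokens shifted by one position).
T, B, V, E, H, D = 7, 4, 100, 32, 64, 12
decoder = ConditionalLSTMDecoder(V, E, H, D)
tokens = torch.randint(0, V, (T, B))
da_vector = torch.rand(B, D)
w_logits, d_logits = decoder(tokens, da_vector)
loss = snapshot_loss(w_logits, d_logits, tokens, torch.rand(T, B, D).round())
loss.backward()
```

The companion term here supervises an intermediate readout of the conditioning information at every decoding step alongside the standard LM objective; how the paper actually defines the snapshot targets is not specified in the abstract, so the binary per-step targets above are purely a placeholder.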
Citation
Wen, T.-H., Gašić, M., Mrkšić, N., Rojas-Barahona, L. M., Su, P.-H., Ultes, S., … Young, S. (2016). Conditional generation and snapshot learning in neural dialogue systems. In EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 2153–2162). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d16-1233