We model coherent conversation continuation via RNN-based dialogue models equipped with a dynamic attention mechanism. Our attention-RNN language model dynamically increases the scope of attention over the history as the conversation continues, as opposed to standard attention (or alignment) models with a fixed input scope in a sequence-to-sequence model. This allows each generated word to be associated with the most relevant words in its corresponding conversation history. We evaluate the model on two popular dialogue datasets, the open-domain MovieTriples dataset and the closed-domain Ubuntu Troubleshoot dataset, and achieve significant improvements over the state-of-the-art and baselines on several automatic metrics, including complementary diversity-based metrics, as well as in human evaluation and qualitative visualizations. We also show that a vanilla RNN with dynamic attention outperforms more complex memory models (e.g., LSTM and GRU) by allowing for flexible, long-distance memory. We further promote coherence via topic modeling-based reranking.
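As a rough, hypothetical illustration of the core idea (not the authors' released code), the following PyTorch sketch shows an RNN language model whose attention scope grows with the conversation history: at every generation step the model attends over all hidden states produced so far, so the attended window expands as the dialogue continues. The class name, layer dimensions, scoring function, and the single-layer vanilla-RNN choice are assumptions made for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicAttentionRNNLM(nn.Module):
    """Sketch of an RNN language model with a dynamically growing attention scope
    (illustrative only; not the paper's implementation)."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn_cell = nn.RNNCell(embed_dim, hidden_dim)   # vanilla RNN, per the abstract's ablation
        self.attn_score = nn.Linear(hidden_dim * 2, 1)      # scores a (history state, current state) pair
        self.out = nn.Linear(hidden_dim * 2, vocab_size)    # predicts the next word from [state; context]

    def forward(self, tokens):
        # tokens: (batch, seq_len) word ids of the running conversation
        batch, seq_len = tokens.shape
        h = tokens.new_zeros((batch, self.rnn_cell.hidden_size), dtype=torch.float)
        history = []   # hidden states of all words seen so far (the attention scope)
        logits = []
        for t in range(seq_len):
            h = self.rnn_cell(self.embed(tokens[:, t]), h)
            history.append(h)
            # The attention scope is the entire history up to step t,
            # so it expands as the conversation continues.
            hist = torch.stack(history, dim=1)               # (batch, t+1, hidden)
            scores = self.attn_score(
                torch.cat([hist, h.unsqueeze(1).expand_as(hist)], dim=-1)
            ).squeeze(-1)                                     # (batch, t+1)
            weights = F.softmax(scores, dim=-1)
            context = (weights.unsqueeze(-1) * hist).sum(dim=1)
            logits.append(self.out(torch.cat([h, context], dim=-1)))
        return torch.stack(logits, dim=1)                     # (batch, seq_len, vocab)

Training such a model would minimize cross-entropy between these logits and the next tokens; the topic modeling-based reranking mentioned above is a separate post-processing stage and is not shown here.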
Mei, H., Bansal, M., & Walter, M. R. (2017). Coherent dialogue with attention-based language models. In 31st AAAI Conference on Artificial Intelligence, AAAI 2017 (pp. 3252–3258). AAAI Press. https://doi.org/10.1609/aaai.v31i1.10961