Exploring personalized neural conversational models

Abstract

Modeling dialog systems is currently one of the most active research problems in Natural Language Processing. Recent advances in Deep Learning have sparked interest in using neural networks to model language, particularly for personalized conversational agents that can retain contextual information during dialog exchanges. This work carefully explores and compares several recently proposed neural conversation models, and carries out a detailed evaluation of the factors that can significantly affect predictive performance, such as pretraining, embedding training, data cleaning, diversity-based reranking, and evaluation setting. Based on the tradeoffs among these models, we propose a new neural generative dialog model, conditioned on speakers as well as context history, that outperforms previous models on both retrieval and generative metrics. Our findings indicate that pretraining speaker embeddings on larger datasets, as well as bootstrapping word and speaker embeddings, can significantly improve performance (up to 3 points in perplexity), and that promoting diversity using Mutual Information-based techniques has a very strong effect on ranking metrics.
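
The paper's full architecture is described in the article itself; as a rough illustration of what "conditioned on speakers as well as context history" can mean in a neural generative dialog model, the sketch below feeds a learned speaker embedding into the decoder at every time step, alongside the word embedding, while the recurrent state carries the context history. All class names, dimensions, and design details here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SpeakerConditionedDecoder(nn.Module):
    """Illustrative sketch (hypothetical, not the paper's exact model):
    an LSTM decoder whose input at each step concatenates the word
    embedding with a learned speaker embedding, so generation is
    conditioned on who is speaking."""

    def __init__(self, vocab_size, num_speakers,
                 emb_dim=256, spk_dim=64, hid_dim=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)   # could be bootstrapped from pretrained word vectors
        self.spk_emb = nn.Embedding(num_speakers, spk_dim)  # could be pretrained on a larger dialog corpus
        self.lstm = nn.LSTM(emb_dim + spk_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens, speaker_ids, state=None):
        # tokens: (batch, seq_len); speaker_ids: (batch,)
        w = self.word_emb(tokens)                            # (batch, seq_len, emb_dim)
        s = self.spk_emb(speaker_ids).unsqueeze(1)           # (batch, 1, spk_dim)
        s = s.expand(-1, tokens.size(1), -1)                 # broadcast speaker over time steps
        h, state = self.lstm(torch.cat([w, s], dim=-1), state)  # state carries context history
        return self.out(h), state                            # logits over the vocabulary
```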
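
Likewise, the Mutual Information-based diversity promotion the abstract credits with strong ranking gains is typically realized as a reranking step over N-best candidates, scoring each candidate T for a source S by log p(T|S) - λ log p(T) so that generic, high-frequency responses are penalized. The sketch below assumes two externally trained scoring functions and is a generic MMI reranker, not the authors' specific procedure.

```python
def mmi_rerank(candidates, log_p_t_given_s, log_p_t, lam=0.5):
    """Rerank candidate responses by an MMI-style objective:
        score(T) = log p(T | S) - lam * log p(T),
    which down-weights generic replies that score high under an
    unconditional language model. Both scoring arguments are assumed
    callables returning log-probabilities for a candidate string."""
    scored = [(log_p_t_given_s(t) - lam * log_p_t(t), t) for t in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [t for _, t in scored]
```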

Citation (APA)

Kottur, S., Wang, X., & Carvalho, V. R. (2017). Exploring personalized neural conversational models. In IJCAI International Joint Conference on Artificial Intelligence (pp. 3728–3734). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/521
