Learning to control the specificity in neural response generation

65 citations · 180 Mendeley readers
Abstract

In conversation, a generic response (e.g., “I don't know”) can correspond to a large variety of input utterances. Previous generative conversational models usually employ a single model to learn the relationships among all utterance-response pairs, and thus tend to favor general and trivial responses that appear frequently. To address this problem, we propose a novel controlled response generation mechanism to handle different utterance-response relationships in terms of specificity. Specifically, we introduce an explicit specificity control variable into a sequence-to-sequence model; the variable interacts with the usage representations of words through a Gaussian Kernel layer to guide the model to generate responses at different specificity levels. We describe two ways to acquire distant labels for the specificity control variable during learning. Empirical studies show that our model significantly outperforms state-of-the-art response generation models under both automatic and human evaluations.
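The key mechanism described above, an explicit specificity control variable that interacts with per-word usage representations through a Gaussian Kernel layer, can be pictured with a minimal sketch. The code below is an illustrative assumption rather than the authors' implementation: names such as GaussianKernelBias and usage_emb, the usage-embedding size, and the fixed kernel width sigma are invented for the example, and the kernel scores are simply added to a generic seq2seq decoder's logits.

```python
# Minimal sketch (not the paper's code): a scalar specificity control
# variable s interacts with learned per-word usage representations through
# a Gaussian kernel, producing a bias over the vocabulary that steers a
# seq2seq decoder toward words of the requested specificity level.

import torch
import torch.nn as nn


class GaussianKernelBias(nn.Module):
    """Scores every vocabulary word against a specificity level s in [0, 1]."""

    def __init__(self, vocab_size: int, usage_dim: int = 32, sigma: float = 0.1):
        super().__init__()
        # One usage representation per word (assumed to be learned jointly
        # with the rest of the model).
        self.usage_emb = nn.Embedding(vocab_size, usage_dim)
        # Project each usage representation to a scalar word-level specificity.
        self.to_scalar = nn.Linear(usage_dim, 1)
        self.sigma = sigma

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        """
        s: (batch,) specificity control variable in [0, 1].
        Returns: (batch, vocab_size) kernel scores, large when a word's
        usage-based specificity is close to the requested level s.
        """
        word_spec = torch.sigmoid(self.to_scalar(self.usage_emb.weight)).squeeze(-1)  # (vocab,)
        diff = s.unsqueeze(1) - word_spec.unsqueeze(0)                                # (batch, vocab)
        return torch.exp(-diff.pow(2) / (2 * self.sigma ** 2))


# Usage: add the kernel scores to the decoder's output logits at each step,
# so a larger s pushes probability mass toward more specific words.
vocab_size, batch = 10000, 4
kernel = GaussianKernelBias(vocab_size)
decoder_logits = torch.randn(batch, vocab_size)   # stand-in for any seq2seq decoder output
s = torch.tensor([0.2, 0.5, 0.8, 1.0])            # requested specificity levels
controlled_logits = decoder_logits + kernel(s)
probs = torch.softmax(controlled_logits, dim=-1)
```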

Citation (APA)

Zhang, R., Guo, J., Fan, Y., Lan, Y., Xu, J., & Cheng, X. (2018). Learning to control the specificity in neural response generation. In ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) (Vol. 1, pp. 1108–1117). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p18-1102
