Are training samples correlated? Learning to generate dialogue responses with multiple references

36 citations · 159 Mendeley readers

Abstract

Due to its potential applications, open-domain dialogue generation has become popular and achieved remarkable progress in recent years, but it sometimes suffers from generic responses. Previous models are generally trained on a 1-to-1 mapping from an input query to its response, which ignores the 1-to-n nature of dialogue: multiple valid responses may correspond to the same query. In this paper, we propose to utilize multiple references by considering the correlation among different valid responses and modeling the 1-to-n mapping with a novel two-step generation architecture. The first generation phase extracts the common features shared by different responses; combined with the distinctive features obtained in the second phase, these can generate multiple diverse and appropriate responses. Experimental results show that our proposed model effectively improves response quality and outperforms existing neural dialogue models on both automatic and human evaluations.
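The two-step idea in the abstract, separating what all valid responses share from what makes each one distinct, can be illustrated with toy vector arithmetic. This is a minimal sketch, not the authors' model: the function names, the use of plain averaged "embeddings", and the additive recombination are all illustrative assumptions standing in for the paper's neural architecture.

```python
# Illustrative sketch of the two-phase decomposition (NOT the paper's code):
# phase 1 extracts features common to all valid responses to a query;
# phase 2 keeps each response's distinctive remainder, and the two parts
# recombine into one specific response.

def mean_vector(vectors):
    """Phase 1 (common features): average the toy response embeddings."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distinctive_vector(response_vec, common_vec):
    """Phase 2 (distinctive features): what remains after removing the common part."""
    return [r - c for r, c in zip(response_vec, common_vec)]

def combine(common_vec, distinctive_vec):
    """Recombine common + distinctive features into one specific response."""
    return [c + d for c, d in zip(common_vec, distinctive_vec)]

# Toy "embeddings" of three valid responses to the same query.
responses = [[3.0, 0.0], [0.0, 3.0], [3.0, 3.0]]
common = mean_vector(responses)           # shared content: [2.0, 2.0]
for r in responses:
    d = distinctive_vector(r, common)     # per-response variation
    assert combine(common, d) == r        # recombination recovers the response
```

In the paper this decomposition is learned by a neural generator rather than computed by subtraction, but the sketch shows the division of labor: one phase captures what any appropriate response must say, the other contributes the variation that yields diverse outputs.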

Citation (APA)

Qiu, L., Li, J., Bi, W., Zhao, D., & Yan, R. (2020). Are training samples correlated? Learning to generate dialogue responses with multiple references. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (pp. 3826–3835). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p19-1372
