Improving knowledge-aware dialogue response generation by using human-written prototype dialogues


Abstract

Incorporating commonsense knowledge can alleviate the problem of generic responses in open-domain generative dialogue systems. However, selecting knowledge facts that fit the dialogue context remains a challenge. The widely used Entity Name Matching approach often retrieves irrelevant facts because it considers only local entity words. This paper proposes a novel knowledge selection approach, Prototype-KR, and a knowledge-aware generative model, Prototype-KRG. Given a query, our approach first retrieves a set of prototype dialogues relevant to the query. We find that the knowledge facts used in prototype dialogues are usually highly relevant to the current query; Prototype-KR therefore ranks these facts by semantic similarity to the query and selects the most appropriate ones. Prototype-KRG then generates an informative response conditioned on the selected facts. Experiments show that our approach achieves notable improvements on most metrics compared to generative baselines; compared to IR (retrieval)-based baselines, responses generated by our approach are more relevant to the context and comparably informative.
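
The abstract outlines a two-stage pipeline: retrieve prototype dialogues for the query, then rank the knowledge facts those prototypes used by semantic similarity. Below is a minimal sketch of the selection stage (Prototype-KR) only; the prototype_kr function name, the dialogue/fact data format, and the TF-IDF similarity are assumptions for illustration, standing in for the retrieval index and learned embeddings the paper presumably uses but does not detail in the abstract.

```python
# Hypothetical sketch of the Prototype-KR selection step; not the
# paper's actual implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def prototype_kr(query, prototype_dialogues, top_k=3):
    """Rank the knowledge facts attached to retrieved prototype
    dialogues by similarity to the query and return the top-k facts.

    prototype_dialogues: list of dicts, each with a "context" string
    and a "facts" list (assumed format, not specified by the paper).
    """
    # Pool the candidate facts from all retrieved prototype dialogues.
    candidate_facts = [f for d in prototype_dialogues for f in d["facts"]]
    if not candidate_facts:
        return []

    # Semantic similarity is approximated here with TF-IDF cosine
    # similarity; the paper likely uses learned sentence embeddings.
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform([query] + candidate_facts)
    scores = cosine_similarity(vectors[0], vectors[1:])[0]

    # Keep the highest-scoring facts to condition the generator on.
    ranked = sorted(zip(scores, candidate_facts), reverse=True)
    return [fact for _, fact in ranked[:top_k]]

# Example usage: prototypes assumed already retrieved for the query
# (the retrieval step itself is omitted).
prototypes = [
    {"context": "I love jazz music.",
     "facts": ["Jazz originated in New Orleans.",
               "Miles Davis was a jazz trumpeter."]},
    {"context": "What should I cook tonight?",
     "facts": ["Pasta carbonara uses eggs and pancetta."]},
]
print(prototype_kr("Tell me about jazz musicians.", prototypes, top_k=2))
```

The selected facts would then be fed to the Prototype-KRG generator along with the query; that generation stage is not sketched here, since the abstract gives no architectural detail to ground it.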

Cite

APA

Wu, S., Li, Y., Zhang, D., & Wu, Z. (2020). Improving knowledge-aware dialogue response generation by using human-written prototype dialogues. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 1402–1411). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp.126
