More is Better: Enhancing Open-Domain Dialogue Generation via Multi-Source Heterogeneous Knowledge

Abstract

Despite achieving remarkable performance, previous knowledge-enhanced works usually rely on a single-source, homogeneous knowledge base of limited coverage. As a result, they often degenerate into traditional methods because not every dialogue can be linked to a knowledge entry. This paper proposes a novel dialogue generation model, MSKE-Dialog, that addresses this issue with three unique advantages: (1) rather than a single source, MSKE-Dialog can simultaneously leverage multiple heterogeneous knowledge sources (including, but not limited to, commonsense knowledge facts, textual knowledge, and infobox knowledge) to improve knowledge coverage; (2) to avoid topic conflicts between the dialogue context and the different knowledge sources, we propose a Multi-Reference Selection mechanism to better select context and knowledge; (3) we propose a Multi-Reference Generation mechanism that produces informative responses by referring to multiple generation references at the same time. Extensive evaluations on a Chinese dataset show the superior performance of this work against various state-of-the-art approaches. To the best of our knowledge, this work is the first to use multi-source heterogeneous knowledge in open-domain knowledge-enhanced dialogue generation.
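The abstract names two architectural mechanisms: Multi-Reference Selection, which weighs heterogeneous knowledge sources against the dialogue context, and Multi-Reference Generation, which conditions decoding on several references at once. Below is a minimal, hypothetical PyTorch sketch of how such mechanisms might be structured; the attention-based selector, the gated decoder step, and every name in it are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the two mechanisms named in the abstract --
# NOT MSKE-Dialog's actual code. All names and design choices here
# (scaled dot-product selection, gated fusion) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiReferenceSelection(nn.Module):
    """Scores each knowledge source against the dialogue context and
    mixes them, so topically conflicting sources are down-weighted
    rather than concatenated blindly."""
    def __init__(self, d_model: int):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)

    def forward(self, context: torch.Tensor, sources: torch.Tensor):
        # context: (batch, d) pooled dialogue-context encoding
        # sources: (batch, n_sources, d) one vector per knowledge
        #          source (e.g. commonsense facts, text, infobox)
        q = self.query(context).unsqueeze(1)               # (batch, 1, d)
        scores = (q * sources).sum(-1) / sources.size(-1) ** 0.5
        weights = F.softmax(scores, dim=-1)                # (batch, n_sources)
        fused = (weights.unsqueeze(-1) * sources).sum(1)   # (batch, d)
        return fused, weights

class MultiReferenceGeneration(nn.Module):
    """One decoder step that refers to both the context and the fused
    knowledge, gating between them per step."""
    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, 1)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, dec_state, context_vec, knowledge_vec):
        # Gate decides how much to rely on knowledge vs. context.
        g = torch.sigmoid(self.gate(torch.cat([context_vec, knowledge_vec], -1)))
        mixed = g * knowledge_vec + (1 - g) * context_vec
        return F.log_softmax(self.out(dec_state + mixed), dim=-1)

if __name__ == "__main__":
    B, S, D, V = 2, 3, 16, 100    # batch, #sources, hidden dim, vocab
    sel = MultiReferenceSelection(D)
    gen = MultiReferenceGeneration(D, V)
    ctx = torch.randn(B, D)
    srcs = torch.randn(B, S, D)   # e.g. commonsense / text / infobox encodings
    fused, w = sel(ctx, srcs)
    logp = gen(torch.randn(B, D), ctx, fused)
    print(w.shape, logp.shape)    # torch.Size([2, 3]) torch.Size([2, 100])
```

The gate is one plausible way to realize "referring to multiple generation references at the same time"; the paper may well fuse references differently (e.g. with multi-head attention over all sources at every decoding step).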

Citation (APA)

Wu, S., Li, Y., Wang, M., Zhang, D., Zhou, Y., & Wu, Z. (2021). More is Better: Enhancing Open-Domain Dialogue Generation via Multi-Source Heterogeneous Knowledge. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021) (pp. 2286–2300). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.emnlp-main.175
