Knowledge-Grounded Dialogue Generation with Term-level De-noising


Abstract

Dialogue generation has been improved by injecting knowledge into generative models. However, adding knowledge by simply selecting sentences or paragraphs is likely to introduce noise and diminish the effectiveness of the generative models. In this paper, we present a novel Knowledge Term Weighting Model (KTWM) that incorporates term-level de-noising of the selected knowledge. KTWM includes a module for generating Simulated Response Vectors (SRVs) and uses the attention distributions between SRVs and the knowledge embeddings to determine knowledge term weights. Our experiments demonstrate that KTWM, combined with various knowledge selection algorithms, consistently achieves statistically significant improvements over methods without term weighting on two publicly available datasets, Wizard of Wikipedia (Wiz) and Holl-E. The gains are particularly pronounced on the Wiz test set with unseen topics, demonstrating the robustness of the KTWM noise-reduction approach.
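The core idea described in the abstract — scoring each knowledge term by how much simulated response vectors attend to it, then using those scores to down-weight noisy terms — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the scaled dot-product attention form, and the averaging over SRVs are all assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def knowledge_term_weights(knowledge_emb, srv):
    """Hypothetical term-weighting step: each simulated response
    vector (SRV) attends over the knowledge term embeddings, and
    the attention distributions are averaged into one weight per term.

    knowledge_emb: (num_terms, dim) embeddings of selected knowledge terms
    srv:           (num_srvs, dim) simulated response vectors
    returns:       (num_terms,) nonnegative weights summing to 1
    """
    dim = knowledge_emb.shape[1]
    # Scaled dot-product attention logits: (num_srvs, num_terms)
    logits = srv @ knowledge_emb.T / np.sqrt(dim)
    attn = softmax(logits, axis=-1)
    # Aggregate over SRVs to get a single weight per knowledge term
    return attn.mean(axis=0)

def denoise_knowledge(knowledge_emb, srv):
    """Down-weight likely-noisy terms by scaling their embeddings."""
    w = knowledge_term_weights(knowledge_emb, srv)
    return knowledge_emb * w[:, None]
```

Terms that no simulated response attends to receive small weights, so their embeddings contribute less to generation — the term-level analogue of discarding a whole noisy sentence.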

Citation (APA)

Zheng, W., Milic-Frayling, N., & Zhou, K. (2021). Knowledge-Grounded Dialogue Generation with Term-level De-noising. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 2972–2983). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.262
