Knowledge Enhanced Fine-Tuning for Better Handling Unseen Entities in Dialogue Generation

Abstract

Although pre-trained models have achieved great success in dialogue generation, their performance drops dramatically when the input contains an entity that does not appear in the pre-training or fine-tuning datasets (an unseen entity). To address this issue, existing methods leverage an external knowledge base to generate appropriate responses. In real-world scenarios, however, the entity may not be covered by the knowledge base, or the retrieved knowledge may be imprecise. To deal with this problem, instead of feeding the knowledge base to the model as input, we force the model to learn a better semantic representation by predicting the information in the knowledge base based only on the input context. Specifically, with the help of a knowledge base, we introduce two auxiliary training objectives: 1) Interpret Masked Word, which conjectures the meaning of a masked entity given the context; and 2) Hypernym Generation, which predicts the hypernym of the entity based on the context. Experimental results on two dialogue corpora verify the effectiveness of our methods under both knowledge-available and knowledge-unavailable settings.
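The abstract only names the two auxiliary objectives, so the following is a minimal sketch of how such multi-task fine-tuning could look, assuming a seq2seq backbone (BART via Hugging Face Transformers here) and assuming both auxiliary tasks are cast as extra generation losses; the loss weights, data fields, and mixing scheme are illustrative, not the authors' exact setup.

```python
# Hedged sketch of knowledge-enhanced fine-tuning with two auxiliary objectives.
# Assumptions: a seq2seq model, and knowledge-base fields (definition, hypernym)
# available at training time only, used to build auxiliary targets.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

def seq2seq_loss(source: str, target: str) -> torch.Tensor:
    """Standard conditional-generation loss for one (source, target) pair."""
    enc = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
    return model(**enc, labels=labels).loss

# One hypothetical training example: a context containing the entity "macchiato",
# plus knowledge-base fields used only to build the auxiliary targets.
context = "I could really go for a macchiato right now."
response = "There is a good espresso bar around the corner."
entity, definition, hypernym = "macchiato", "an espresso with a small amount of milk", "coffee"

# 1) Response generation (the main dialogue objective).
loss_response = seq2seq_loss(context, response)

# 2) Interpret Masked Word: mask the entity and predict its meaning from context.
masked_context = context.replace(entity, tokenizer.mask_token)
loss_imw = seq2seq_loss(masked_context, definition)

# 3) Hypernym Generation: predict the entity's hypernym from the context.
loss_hyp = seq2seq_loss(context, hypernym)

# Combine with illustrative weights and update.
loss = loss_response + 0.5 * loss_imw + 0.5 * loss_hyp
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Because the knowledge base is used only to construct prediction targets during fine-tuning, no knowledge retrieval is needed at inference time, which is what lets the approach work even when an entity is missing from the knowledge base.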

Cite


APA

Cui, L., Wu, Y., Liu, S., & Zhang, Y. (2021). Knowledge Enhanced Fine-Tuning for Better Handling Unseen Entities in Dialogue Generation. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 2328–2337). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.179
