Generative Model Using Knowledge Graph for Document-Grounded Conversations

5 citations · 11 Mendeley readers

Abstract

Document-grounded conversation (DGC) is a natural language generation task that produces fluent and informative responses by leveraging both the dialogue history and one or more grounding documents. Recent DGC work has focused on fine-tuning pretrained language models. However, these approaches must incorporate background knowledge under capacity constraints: the maximum input length is typically limited to 512 or 1024 tokens. This limitation is critical in DGC because most grounding documents are longer than the maximum input length. To address this problem, we propose a document-grounded generative model that uses a knowledge graph. The proposed model converts knowledge sentences extracted from the given document(s) into a knowledge graph and fine-tunes the pretrained model using the graph. We validated the effectiveness of the proposed model through comparative experiments on the well-known Wizard-of-Wikipedia dataset, and it outperformed the previous state-of-the-art model in our experiments on the Doc2dial dataset.
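
The abstract describes the pipeline only at a high level. The sketch below is a minimal illustration of the idea, not the authors' implementation: it stubs in a toy triple extractor in place of a real one (e.g., OpenIE), merges the triples into a small graph, and linearizes the graph with marker tokens so the grounding fits a fixed token budget. All names here (extract_triples, build_graph, the <S>/<R>/<O> markers, MAX_INPUT_TOKENS) are hypothetical.

```python
# Minimal sketch (assumptions labeled above): document sentences ->
# (subject, relation, object) triples -> graph -> linearized input
# that fits a pretrained model's length limit.
from collections import defaultdict

MAX_INPUT_TOKENS = 512  # hypothetical budget; matches a typical LM input limit


def extract_triples(sentence: str) -> list[tuple[str, str, str]]:
    """Toy stand-in for a real triple extractor such as OpenIE."""
    # Heuristic: "X is Y." -> (X, "is", Y). Real extraction is far richer.
    if " is " in sentence:
        subj, obj = sentence.rstrip(".").split(" is ", 1)
        return [(subj.strip(), "is", obj.strip())]
    return []


def build_graph(sentences: list[str]) -> dict[str, list[tuple[str, str]]]:
    """Merge triples into an adjacency map, deduplicating repeated edges."""
    graph: defaultdict[str, list[tuple[str, str]]] = defaultdict(list)
    for sentence in sentences:
        for subj, rel, obj in extract_triples(sentence):
            if (rel, obj) not in graph[subj]:
                graph[subj].append((rel, obj))
    return dict(graph)


def linearize(graph: dict[str, list[tuple[str, str]]], history: str) -> str:
    """Serialize the graph with marker tokens, append the dialogue
    history, and truncate to a crude whitespace token budget."""
    parts = [f"<S> {subj} <R> {rel} <O> {obj}"
             for subj, edges in graph.items() for rel, obj in edges]
    text = " ".join(parts) + " <SEP> " + history
    return " ".join(text.split()[:MAX_INPUT_TOKENS])


if __name__ == "__main__":
    document = [
        "The Eiffel Tower is a landmark in Paris.",
        "Paris is the capital of France.",
    ]
    history = "User: Where is the Eiffel Tower?"
    print(linearize(build_graph(document), history))
```

In a real system, the linearized graph would be tokenized with the pretrained model's own tokenizer and concatenated with the dialogue history as input to a seq2seq model (e.g., BART) during fine-tuning. The compression is the point: triples discard function words and duplicated entity mentions, so far more of a long document's knowledge fits within the 512- or 1024-token limit than with raw text.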

Citation (APA)

Kim, B., Lee, D., Kim, D., Kim, H., Kim, S., Kwon, O. W., & Kim, H. (2022). Generative Model Using Knowledge Graph for Document-Grounded Conversations. Applied Sciences (Switzerland), 12(7). https://doi.org/10.3390/app12073367
