PLATO-KAG: Unsupervised Knowledge-Grounded Conversation via Joint Modeling

Citations: 10 · Mendeley readers: 51

Abstract

Large-scale conversation models increasingly leverage external knowledge to improve factual accuracy in response generation. Given the infeasibility of annotating external knowledge for large-scale dialogue corpora, it is desirable to learn knowledge selection and response generation in an unsupervised manner. In this paper, we propose PLATO-KAG (Knowledge-Augmented Generation), an unsupervised learning approach for end-to-end knowledge-grounded conversation modeling. For each dialogue context, the top-k relevant knowledge elements are selected and then employed in knowledge-grounded response generation. The two components of knowledge selection and response generation are optimized jointly and effectively under a balanced objective. Experimental results on two publicly available datasets validate the superiority of PLATO-KAG.
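The abstract only sketches the approach, so the following is a minimal illustration rather than the paper's exact formulation: a joint objective that scores knowledge candidates against the dialogue context, keeps the top-k, and marginalizes the generation likelihood over them so that both the selector and the generator receive training signal. The function names, the dot-product relevance scorer, and the marginalization form are assumptions; the paper's specific "balanced objective" is not reproduced here.

import torch
import torch.nn.functional as F

def knowledge_grounded_loss(ctx_emb, know_embs, gen_logprob_fn, k=8):
    """Hypothetical sketch of joint knowledge selection + response generation.

    ctx_emb:        [d]    dialogue-context embedding
    know_embs:      [N, d] candidate knowledge embeddings
    gen_logprob_fn: maps a knowledge index -> log p(response | context, knowledge),
                    as produced by a knowledge-grounded generator
    """
    # Knowledge selection: dot-product relevance, keep the top-k candidates.
    scores = know_embs @ ctx_emb                    # [N]
    topk_scores, topk_idx = scores.topk(k)          # [k], [k]
    sel_logp = F.log_softmax(topk_scores, dim=-1)   # log p(z | c), renormalized over top-k

    # Generation likelihood conditioned on each selected knowledge element.
    gen_logp = torch.stack([gen_logprob_fn(i) for i in topk_idx.tolist()])  # [k]

    # Joint objective: marginalize the response likelihood over the top-k
    # knowledge, so selection and generation are optimized together.
    nll = -torch.logsumexp(sel_logp + gen_logp, dim=-1)
    return nll

Training with such a loss back-propagates through both the softmax over selection scores and the generator likelihoods, which is one common way to optimize the two components jointly without knowledge annotations.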

Cite (APA)

Huang, X., He, H., Bao, S., Wang, F., Wu, H., & Wang, H. (2021). PLATO-KAG: Unsupervised Knowledge-Grounded Conversation via Joint Modeling. In Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI (NLP4ConvAI 2021) (pp. 143–154). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.nlp4convai-1.14
