Exploring In-Context Learning for Knowledge Grounded Dialog Generation

Abstract

Large neural-based dialog generation models have been applied in many real-life scenarios, yet they are prone to hallucination and tend to produce factually inaccurate outputs, which raises serious concerns. To alleviate this problem, we propose IKA, a plug-and-play retrieval-based framework that leverages in-context learning and retrieval techniques to enhance LLMs on knowledge-grounded dialog generation. We design thorough experiments on a large-scale knowledge graph with 1M+ facts (Moon et al., 2019) to investigate the effectiveness and generalization of our framework. Experiments show that our method surpasses the previous training-based SOTA by a large margin: 46.67% in BLEU4, 26.01% in ROUGE-L, 122.90% in BARTScore and 30.50% in Entity Coverage F1. Further analysis reveals promising abilities of LLMs to perform knowledge-intensive tasks, a capability previously considered weak and understudied.
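To make the retrieval-plus-in-context-learning idea concrete, the following is a minimal sketch (in Python) of the general pattern the abstract describes: retrieve relevant facts from a knowledge graph and prepend them, together with a few demonstrations, to the dialog context before prompting an LLM. This is not the authors' IKA implementation; the function names, the toy knowledge graph, and the word-overlap retriever are illustrative assumptions only.

    # Sketch of retrieval-augmented in-context prompting for
    # knowledge-grounded dialog generation (illustrative, not IKA itself).
    from typing import List, Tuple

    # A toy knowledge graph: (subject, relation, object) triples.
    KG: List[Tuple[str, str, str]] = [
        ("Inception", "directed_by", "Christopher Nolan"),
        ("Inception", "released_in", "2010"),
        ("Christopher Nolan", "born_in", "London"),
    ]

    def retrieve_facts(context: str, kg: List[Tuple[str, str, str]], k: int = 2) -> List[str]:
        """Rank triples by word overlap with the dialog context (a simple
        stand-in for a real retriever) and return the top-k as text facts."""
        ctx_tokens = set(context.lower().split())
        def score(triple: Tuple[str, str, str]) -> int:
            fact_tokens = set(" ".join(triple).replace("_", " ").lower().split())
            return len(ctx_tokens & fact_tokens)
        ranked = sorted(kg, key=score, reverse=True)
        return [f"{s} {r.replace('_', ' ')} {o}" for s, r, o in ranked[:k]]

    def build_prompt(demonstrations: List[Tuple[str, str, str]],
                     facts: List[str], dialog_context: str) -> str:
        """Assemble an in-context prompt: a few (facts, dialog, response)
        demonstrations followed by the retrieved facts and current dialog."""
        blocks = []
        for demo_facts, demo_context, demo_response in demonstrations:
            blocks.append(f"Facts: {demo_facts}\nDialog: {demo_context}\nResponse: {demo_response}")
        blocks.append(f"Facts: {'; '.join(facts)}\nDialog: {dialog_context}\nResponse:")
        return "\n\n".join(blocks)

    if __name__ == "__main__":
        demos = [("Titanic directed by James Cameron",
                  "Who made Titanic?",
                  "Titanic was directed by James Cameron.")]
        context = "Who directed Inception?"
        prompt = build_prompt(demos, retrieve_facts(context, KG), context)
        print(prompt)  # This prompt would then be passed to an LLM for generation.

Because the knowledge is supplied in the prompt rather than learned through fine-tuning, such a setup stays plug-and-play: the knowledge graph or retriever can be swapped without retraining the underlying model.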

Citation (APA)
Chen, Q., Wu, W., & Li, S. (2023). Exploring In-Context Learning for Knowledge Grounded Dialog Generation. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 10071–10081). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.675
