Towards Informative Open-ended Text Generation with Dynamic Knowledge Triples

Citations: 0 · Mendeley readers: 6

Abstract

Pretrained language models (PLMs), especially large language models (LLMs), demonstrate impressive capabilities in open-ended text generation. However, our statistical results show that LLMs often suffer from over-concentrated information: the generated texts focus too narrowly on the given prompt and fail to provide the background and detailed information that human writers supply. To address this issue, we propose a dynamic knowledge-guided informative open-ended text generation approach that utilizes a knowledge graph to help the model generate more contextually related entities and detailed facts. Specifically, we first employ a local knowledge filter to extract relevant knowledge from the comprehensive knowledge graph for a given topic sentence. Then we introduce a dynamic knowledge selector to predict the entity to be mentioned in the subsequent sentence. Finally, we utilize a knowledge-enhanced text generator to produce a more informative output. We evaluate the proposed approach in two scenarios: fine-tuning small PLMs and prompt tuning for LLMs. Experimental results show that our approach generates more informative texts than the baselines.
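The abstract describes a three-stage pipeline (local knowledge filter, dynamic knowledge selector, knowledge-enhanced generator) but does not specify interfaces or scoring functions. The following is a minimal, hypothetical Python sketch of how such a pipeline could be wired together; all names, heuristics, and the stand-in language-model call are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical components; the paper's actual filter/selector/generator
# designs are not given in the abstract.

@dataclass(frozen=True)
class Triple:
    head: str
    relation: str
    tail: str

def local_knowledge_filter(topic: str, kg: list[Triple], top_k: int = 20) -> list[Triple]:
    """Keep the triples most relevant to the topic sentence (toy lexical-overlap score)."""
    topic_tokens = set(topic.lower().split())
    def score(t: Triple) -> int:
        return sum(tok in topic_tokens for tok in (t.head + " " + t.tail).lower().split())
    return sorted(kg, key=score, reverse=True)[:top_k]

def dynamic_knowledge_selector(context: str, candidates: list[Triple]) -> Optional[Triple]:
    """Pick a triple whose head is already in the context but whose tail is new (toy heuristic)."""
    mentioned = context.lower()
    for t in candidates:
        if t.head.lower() in mentioned and t.tail.lower() not in mentioned:
            return t
    return candidates[0] if candidates else None

def knowledge_enhanced_generator(context: str, triple: Optional[Triple],
                                 generate: Callable[[str], str]) -> str:
    """Prepend the selected triple to the prompt before calling the underlying LM."""
    knowledge = f"[{triple.head} | {triple.relation} | {triple.tail}] " if triple else ""
    return generate(knowledge + context)

if __name__ == "__main__":
    kg = [
        Triple("Marie Curie", "field", "radioactivity"),
        Triple("Marie Curie", "award", "Nobel Prize in Physics"),
        Triple("Pierre Curie", "spouse", "Marie Curie"),
    ]
    topic = "Marie Curie was a pioneering scientist."
    filtered = local_knowledge_filter(topic, kg)
    context = topic
    for _ in range(2):  # generate two knowledge-grounded continuation sentences
        triple = dynamic_knowledge_selector(context, filtered)
        # Stand-in for a real PLM/LLM call.
        next_sentence = knowledge_enhanced_generator(
            context, triple, generate=lambda prompt: f"(LM continuation of: {prompt!r})")
        context += " " + next_sentence
    print(context)
```

In this sketch the selector is re-run before every sentence, which is the "dynamic" aspect the abstract highlights: the chosen triple changes as the generated context grows, steering each new sentence toward an entity or fact not yet mentioned.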

Citation (APA)
Ren, Z., Zhao, Y., & Zong, C. (2023). Towards Informative Open-ended Text Generation with Dynamic Knowledge Triples. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 3189–3203). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.210
