Text-Augmented Open Knowledge Graph Completion via Pre-Trained Language Models

7 Citations · 26 Readers (Mendeley)

Abstract

Open knowledge graph (KG) completion aims to draw new findings from known facts. Existing methods that augment KG completion require either (1) factual triples to enlarge the graph reasoning space or (2) manually designed prompts to extract knowledge from a pre-trained language model (PLM), exhibiting limited performance and demanding expensive effort from experts. To this end, we propose TAGREAL, which automatically generates high-quality query prompts and retrieves supporting information from large text corpora to probe knowledge from a PLM for KG completion. Our results show that TAGREAL achieves state-of-the-art performance on two benchmark datasets. We find that TAGREAL performs well even with limited training data, outperforming existing embedding-based, graph-based, and PLM-based methods.
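To make the PLM-probing step concrete, the following is a minimal illustrative sketch, not the authors' implementation: it shows cloze-style knowledge probing with a BERT-style masked LM via the Hugging Face transformers library, where the prompt template and the query triple (Chicago, located_in_country, ?) are hypothetical examples. TAGREAL itself additionally learns the prompt templates automatically and retrieves supporting text from a corpus, which this sketch omits.

```python
# Minimal sketch of cloze-style PLM probing for open KG completion.
# Assumptions: a BERT-style masked LM; a hand-written prompt template
# (TAGREAL generates these automatically); a hypothetical query triple.
from transformers import pipeline

# Any masked-LM checkpoint works for this illustration.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def probe_tail(head: str, template: str, top_k: int = 5):
    """Rank candidate tail entities for an incomplete triple
    (head, relation, ?) by filling the mask slot of a
    relation-specific prompt."""
    prompt = template.format(head=head, mask=fill_mask.tokenizer.mask_token)
    return [(p["token_str"], p["score"]) for p in fill_mask(prompt, top_k=top_k)]

# Query the incomplete triple (Chicago, located_in_country, ?).
for token, score in probe_tail(
    head="Chicago",
    template="{head} is located in the country of {mask}.",
):
    print(f"{token}\t{score:.3f}")
```

The PLM's token-level scores over the mask slot serve as a ranking signal for candidate tails; the paper's contribution lies in choosing good templates and support text so that this signal is reliable.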

Cite

APA

Jiang, P., Agarwal, S., Jin, B., Wang, X., Sun, J., & Han, J. (2023). Text-augmented open knowledge graph completion via pre-trained language models. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 11161–11180). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.709
