CodeKGC: Code Language Model for Generative Knowledge Graph Construction


Abstract

Current generative knowledge graph construction approaches usually fail to capture structural knowledge, simply flattening natural language into serialized text or a specification language. However, large generative language models trained on structured data such as code have demonstrated impressive capability in structural prediction and reasoning over natural language. Motivated by this, we address generative knowledge graph construction with code language models: natural language input is converted into a code format, and the target triples are generated as a code completion task. Specifically, we develop schema-aware prompts that effectively utilize the semantic structure within the knowledge graph. Because code inherently possesses structure, such as class and function definitions, it serves as a useful medium for encoding prior semantic structural knowledge. Furthermore, we employ a rationale-enhanced generation method to boost performance: rationales provide intermediate reasoning steps, thereby improving knowledge extraction. Experimental results indicate that the proposed approach outperforms baselines on benchmark datasets.
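To make the idea concrete, the following is a minimal sketch of a schema-aware, code-format prompt in the spirit of the paper. The class names (Person, Organization), the work_for relation, and the prompt layout are illustrative assumptions for a CoNLL04-style schema, not the paper's exact template.

    # Illustrative sketch only: schema classes and prompt layout are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Entity:
        name: str

    class Person(Entity):        # schema: entity type
        pass

    class Organization(Entity):  # schema: entity type
        pass

    @dataclass
    class Triple:
        head: Entity
        relation: str
        tail: Entity

    def work_for(head: Person, tail: Organization) -> Triple:
        """Schema: employment relation between a person and an organization."""
        return Triple(head, "work_for", tail)

    def build_prompt(text: str) -> str:
        """Serialize the schema plus the input sentence as a code-completion
        prompt; the model is expected to complete the `triples = [` line
        with constructor calls drawn from the schema above."""
        schema = (
            "class Person(Entity): ...\n"
            "class Organization(Entity): ...\n"
            "def work_for(head: Person, tail: Organization) -> Triple: ...\n"
        )
        return f'{schema}\n"""{text}"""\ntriples = ['

    if __name__ == "__main__":
        print(build_prompt("Steve Jobs co-founded Apple in 1976."))

Given the example sentence, a code language model would ideally complete the prompt with something like triples = [work_for(Person("Steve Jobs"), Organization("Apple"))]. In the rationale-enhanced variant, the prompt would additionally ask the model to first emit intermediate steps (for example, the candidate entities as comments) before the final triple list.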

Citation (APA)

Bi, Z., Chen, J., Jiang, Y., Xiong, F., Guo, W., Chen, H., & Zhang, N. (2024). CodeKGC: Code Language Model for Generative Knowledge Graph Construction. ACM Transactions on Asian and Low-Resource Language Information Processing, 23(3). https://doi.org/10.1145/3641850
