JAKET: Joint Pre-training of Knowledge Graph and Language Understanding


Abstract

Knowledge graphs (KGs) contain rich information about world knowledge, entities, and relations, making them valuable supplements to existing pre-trained language models. However, efficiently integrating information from a KG into language modeling remains a challenge, and understanding a knowledge graph in turn requires related textual context. We propose JAKET, a novel joint pre-training framework that models both the knowledge graph and language. Its knowledge module and language module provide essential information to mutually assist each other: the knowledge module produces embeddings for entities mentioned in text, while the language module generates context-aware initial embeddings for entities and relations in the graph. This design enables the pre-trained model to easily adapt to unseen knowledge graphs in new domains. Experimental results on several knowledge-aware NLP tasks show that the proposed framework achieves superior performance by effectively leveraging knowledge in language understanding.
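The mutual-assistance design described in the abstract can be pictured with a small sketch: a language module encodes entity and relation descriptions to initialize KG embeddings, a knowledge module aggregates over the graph to refine entity embeddings, and those refined embeddings are injected back at entity mentions in the text. This is only an illustration of the general idea, not the authors' implementation; all names (LanguageModule, KnowledgeModule), dimensions, and the simple neighbor-averaging step are assumptions made for this example.

```python
# Illustrative sketch only (PyTorch); not the JAKET implementation.
import torch
import torch.nn as nn

class LanguageModule(nn.Module):
    """Contextual text encoder; also produces context-aware initial
    embeddings for KG entities/relations from textual descriptions."""
    def __init__(self, vocab_size=30522, dim=256, layers=2, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        enc_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)

    def forward(self, token_ids):                    # (B, T) -> (B, T, dim)
        return self.encoder(self.embed(token_ids))

    def describe(self, token_ids):                   # mean-pool a description
        return self.forward(token_ids).mean(dim=1)   # (B, dim)

class KnowledgeModule(nn.Module):
    """One round of neighbor aggregation over (head, relation, tail) triples,
    standing in for a graph neural network."""
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, ent_init, rel_init, edges):
        msgs = {i: [] for i in range(ent_init.size(0))}
        for h, r, t in edges:                         # message from tail to head
            msgs[h].append(self.proj(torch.cat([ent_init[t], rel_init[r]], dim=-1)))
        rows = [ent_init[i] + (torch.stack(m).mean(0) if m else torch.zeros_like(ent_init[i]))
                for i, m in msgs.items()]
        return torch.stack(rows)                      # refined entity embeddings

lm, km = LanguageModule(), KnowledgeModule()

# Language module -> knowledge module: initialize KG embeddings from descriptions.
entity_desc = torch.randint(0, 30522, (3, 8))         # 3 entity descriptions (toy token ids)
relation_desc = torch.randint(0, 30522, (2, 4))       # 2 relation descriptions
ent_emb = km(lm.describe(entity_desc), lm.describe(relation_desc),
             edges=[(0, 0, 1), (1, 1, 2)])

# Knowledge module -> language module: inject an entity embedding at its mention.
text = torch.randint(0, 30522, (1, 16))
states = lm(text).clone()
states[0, 5] = states[0, 5] + ent_emb[0]              # fuse entity 0 at token position 5
```

In JAKET this exchange happens during joint pre-training, so the two modules are optimized together rather than fused only at inference time as in this toy example.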

Cite (APA)

Yu, D., Zhu, C., Yang, Y., & Zeng, M. (2022). JAKET: Joint Pre-training of Knowledge Graph and Language Understanding. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 11630–11638). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i10.21417
