Unsupervised embedding enhancements of knowledge graphs using textual associations

Abstract

Knowledge graph embeddings are instrumental for representing and learning from multi-relational data, with recent embedding models showing high effectiveness for inferring new facts from existing databases. However, such precisely structured data is usually limited in quantity and in scope. Therefore, to fully optimize the embeddings it is important to also consider more widely available sources of information such as text. This paper describes an unsupervised approach to incorporate textual information by augmenting entity embeddings with embeddings of associated words. The approach does not modify the optimization objective for the knowledge graph embedding, which allows it to be integrated with existing embedding models. Two distinct forms of textual data are considered, with different embedding enhancements proposed for each case. In the first case, each entity has an associated text document that describes it. In the second case, a text document is not available, and instead entities occur as words or phrases in an unstructured corpus of text fragments. Experiments show that both methods can offer improvement on the link prediction task when applied to many different knowledge graph embedding models.
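To illustrate the general idea sketched in the abstract, the snippet below shows one plausible way an entity embedding could be augmented with word embeddings from associated text. This is a hedged illustration only: the interpolation scheme, the averaging of word vectors, and the names `enhance_entity_embedding`, `word_vectors`, and `alpha` are assumptions for demonstration and are not taken from the paper itself.

```python
# Minimal sketch of text-augmented entity embeddings, not the authors' exact method.
# Assumptions (hypothetical, not from the source): the enhancement is a weighted
# combination of the trained entity embedding and the mean of pretrained word
# vectors for the entity's associated words.
import numpy as np

def enhance_entity_embedding(entity_emb, associated_words, word_vectors, alpha=0.5):
    """Combine a knowledge-graph entity embedding with text-derived word embeddings.

    entity_emb       : np.ndarray of shape (d,), embedding learned by any KG model
    associated_words : list of words from the entity's description or text fragments
    word_vectors     : dict mapping word -> np.ndarray of shape (d,) (pretrained)
    alpha            : interpolation weight between graph and text information
    """
    vecs = [word_vectors[w] for w in associated_words if w in word_vectors]
    if not vecs:
        return entity_emb  # no textual information available; keep original embedding
    text_emb = np.mean(vecs, axis=0)
    return alpha * entity_emb + (1.0 - alpha) * text_emb
```

Because such an enhancement leaves the knowledge graph embedding objective untouched, it can in principle be applied on top of embeddings produced by any existing model.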

Citation (APA)
Veira, N., Keng, B., Padmanabhan, K., & Veneris, A. (2019). Unsupervised embedding enhancements of knowledge graphs using textual associations. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 5218–5225). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/725
