Text-augmented knowledge representation learning based on convolutional network


Abstract

Knowledge graphs describe concepts and entities in the objective world, together with the relations between them, in a structured form, thus providing a better way to manage and understand the vast information on the Internet. Although there are various knowledge embedding models, most of them focus only on factual triples. In fact, entities usually come with concise textual descriptions, which these existing models cannot exploit well. For instance, ConvKB [9], a knowledge embedding model based on convolutional networks, has shown remarkable results in knowledge link prediction, yet it does not make full use of the complementary texts of entities. Therefore, we propose a text-augmented embedding model based on ConvKB, which first uses a bidirectional long short-term memory network with attention (A-BiLSTM) to encode entity descriptions, and then combines the structural embeddings of the symbolic triples with the text embeddings through a novel gate mechanism (in the form of LSTM gates). In this way, both structural and textual representations can be learned. Experiments show that our method outperforms the original ConvKB on tasks such as link prediction.
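The abstract outlines two components: an attention-based BiLSTM that encodes an entity's description into a text vector, and an LSTM-style gate that fuses this text vector with the entity's structural embedding before it is fed to the ConvKB scorer. Since the full paper is not reproduced on this page, the sketch below is only a minimal, hypothetical PyTorch rendering of that idea; the class names, dimensions, and the exact gate formula g * e_struct + (1 - g) * e_text are assumptions for illustration, not the authors' published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ABiLSTMEncoder(nn.Module):
    """Encode an entity description (a sequence of word embeddings) into a
    single text vector via a BiLSTM followed by additive soft attention."""
    def __init__(self, word_dim, hidden_dim, out_dim):
        super().__init__()
        self.bilstm = nn.LSTM(word_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)        # attention scorer
        self.proj = nn.Linear(2 * hidden_dim, out_dim)  # map to entity-embedding size

    def forward(self, desc_emb):                 # (batch, seq_len, word_dim)
        h, _ = self.bilstm(desc_emb)             # (batch, seq_len, 2*hidden_dim)
        scores = self.attn(h).squeeze(-1)        # (batch, seq_len)
        alpha = F.softmax(scores, dim=-1)        # attention weights over words
        text_vec = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)
        return self.proj(text_vec)               # (batch, out_dim)


class GatedFusion(nn.Module):
    """LSTM-style gate mixing a structural embedding with a textual one:
    g = sigmoid(W [e_struct; e_text] + b),  e = g * e_struct + (1 - g) * e_text."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, e_struct, e_text):
        g = torch.sigmoid(self.gate(torch.cat([e_struct, e_text], dim=-1)))
        return g * e_struct + (1.0 - g) * e_text


if __name__ == "__main__":
    enc = ABiLSTMEncoder(word_dim=100, hidden_dim=64, out_dim=100)
    fuse = GatedFusion(dim=100)
    desc = torch.randn(8, 20, 100)     # 8 descriptions, 20 words each (toy data)
    e_text = enc(desc)                 # textual entity embeddings
    e_struct = torch.randn(8, 100)     # structural embeddings learned a la ConvKB
    e = fuse(e_struct, e_text)         # fused representation passed to the scorer
    print(e.shape)                     # torch.Size([8, 100])
```

In this reading of the abstract, the gate lets the model decide, dimension by dimension, how much of the final entity representation comes from the triple structure versus the description text before ConvKB's convolutional scoring is applied.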

Citation (APA)

Liu, C., Zhang, Y., Yu, M., Yu, R., Li, X., Zhao, M., … Yu, J. (2019). Text-augmented knowledge representation learning based on convolutional network. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11953 LNCS, pp. 187–198). Springer. https://doi.org/10.1007/978-3-030-36708-4_16
