Knowledge graphs play an important role in many Artificial Intelligence (AI) applications such as entity linking and question answering. However, most previous studies have focused on symbolic representations of knowledge graphs built from structural information alone, which handle new entities, or rare entities with little relevant knowledge, poorly. In this paper, we propose a new deep knowledge representation architecture that jointly encodes both structural and textual information. We first propose a novel neural model that encodes the text descriptions of entities using Convolutional Neural Networks (CNNs). Second, an attention mechanism is applied to capture the valuable information in these descriptions. We then introduce position vectors as supplementary information. Finally, a gate mechanism is designed to integrate the structure and text representations into a joint representation. Experimental results on two datasets show that our models obtain state-of-the-art results on the link prediction and triplet classification tasks, and achieve the best performance on the relation classification task.
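The gate mechanism described above can be illustrated with a minimal sketch: a learned sigmoid gate interpolates elementwise between the structure-based embedding and the description-based embedding of an entity. The parameterization below (a single linear layer over the concatenated embeddings) is a hypothetical simplification for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def gated_fusion(e_s, e_d, W_g, b_g):
    """Fuse a structure embedding e_s and a text-description embedding e_d.

    A sigmoid gate g in (0, 1)^d is computed from both embeddings
    (hypothetical parameterization), then the joint representation is the
    elementwise convex combination g * e_s + (1 - g) * e_d.
    """
    z = W_g @ np.concatenate([e_s, e_d]) + b_g
    g = 1.0 / (1.0 + np.exp(-z))  # elementwise sigmoid gate
    return g * e_s + (1.0 - g) * e_d

# Toy example with random parameters (illustration only).
rng = np.random.default_rng(0)
d = 4
e_s = rng.normal(size=d)            # structure-based embedding
e_d = rng.normal(size=d)            # description-based embedding
W_g = rng.normal(size=(d, 2 * d))   # gate weights (would be learned)
b_g = np.zeros(d)                   # gate bias
joint = gated_fusion(e_s, e_d, W_g, b_g)
print(joint.shape)  # (4,)
```

Because the gate lies strictly between 0 and 1, each component of the joint representation stays between the corresponding components of the structure and text embeddings, so neither source of information is discarded entirely.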
Gao, W., Fang, Y., Zhang, F., & Yang, Z. (2020). Representation learning of knowledge graphs using convolutional neural networks. Neural Network World, 30(3), 145–160. https://doi.org/10.14311/NNW.2020.30.011