Robust knowledge graph completion with stacked convolutions and a student re-ranking network


Abstract

Knowledge Graph (KG) completion research usually focuses on densely connected benchmark datasets that are not representative of real KGs. We curate two KG datasets that include biomedical and encyclopedic knowledge and use an existing commonsense KG dataset to explore KG completion in the more realistic setting where dense connectivity is not guaranteed. We develop a deep convolutional network that utilizes textual entity representations and demonstrate that our model outperforms recent KG completion methods in this challenging setting. We find that our model's performance improvements stem primarily from its robustness to sparsity. We then distill the knowledge from the convolutional network into a student network that re-ranks promising candidate entities. This re-ranking stage leads to further improvements in performance and demonstrates the effectiveness of entity re-ranking for KG completion.
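To make the two components described above more concrete, here is a minimal, hypothetical sketch in PyTorch of (1) a stacked-convolution scorer that operates on textual entity and relation embeddings and (2) a knowledge-distillation loss that could train a student re-ranker on a shortlist of candidates. This is not the authors' published architecture; the names (StackedConvScorer, distillation_loss, TEXT_DIM) and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch, NOT the authors' exact model: stacked 1-D convolutions over
# textual (head, relation) embeddings that score all candidate tail entities,
# plus a KL-divergence distillation loss for a hypothetical student re-ranker.
import torch
import torch.nn as nn
import torch.nn.functional as F

TEXT_DIM = 768  # dimensionality of pre-computed textual embeddings (assumption)

class StackedConvScorer(nn.Module):
    """Scores a (head, relation) query against every candidate entity."""
    def __init__(self, dim=TEXT_DIM, channels=256, num_layers=3):
        super().__init__()
        layers, in_ch = [], 2  # head and relation embeddings stacked as 2 input channels
        for _ in range(num_layers):
            layers += [nn.Conv1d(in_ch, channels, kernel_size=3, padding=1),
                       nn.BatchNorm1d(channels), nn.ReLU()]
            in_ch = channels
        self.convs = nn.Sequential(*layers)
        self.proj = nn.Linear(channels * dim, dim)

    def forward(self, head_emb, rel_emb, all_entity_embs):
        # head_emb, rel_emb: (batch, dim); all_entity_embs: (num_entities, dim)
        x = torch.stack([head_emb, rel_emb], dim=1)   # (batch, 2, dim)
        x = self.convs(x)                             # (batch, channels, dim)
        query = self.proj(x.flatten(1))               # (batch, dim)
        return query @ all_entity_embs.t()            # (batch, num_entities) scores


def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label distillation: the student matches the teacher's distribution
    over a shortlist of promising candidate entities."""
    t = temperature
    return F.kl_div(F.log_softmax(student_logits / t, dim=-1),
                    F.softmax(teacher_logits / t, dim=-1),
                    reduction="batchmean") * (t * t)
```

In this sketch the convolutional teacher produces scores over all entities, the top-k candidates are passed to a separate student network, and the distillation term encourages the student's re-ranking distribution to match the teacher's; the exact shortlist size and student architecture are left unspecified here.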

Citation (APA)

Lovelace, J., Newman-Griffis, D., Vashishth, S., Lehman, J. F., & Rosé, C. P. (2021). Robust knowledge graph completion with stacked convolutions and a student re-ranking network. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (pp. 1016–1029). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-long.82
