A Multimodal Translation-Based Approach for Knowledge Graph Representation Learning


Abstract

Current methods for knowledge graph (KG) representation learning focus solely on the structure of the KG and do not exploit external information, such as the visual and linguistic information associated with the KG entities. In this paper, we propose a multimodal translation-based approach that defines the energy of a KG triple as the sum of sub-energy functions leveraging both multimodal (visual and linguistic) and structural KG representations. A ranking-based loss is then minimized using a simple neural network architecture. Moreover, we introduce a new large-scale dataset for multimodal KG representation learning. We compare the performance of our approach against baselines on two standard tasks, knowledge graph completion and triple classification, using our dataset as well as the WN9-IMG dataset. The results demonstrate that our approach outperforms all baselines on both tasks and datasets.
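
To make the modeling idea concrete, the sketch below shows one plausible reading of a translation-based (TransE-style) multimodal energy: each entity has a structural embedding plus a projection of fixed multimodal features into the same space, the triple energy is a sum of sub-energies over the resulting head/tail representation combinations, and training minimizes a margin ranking loss against corrupted triples. This is a minimal illustration under assumed names (MultimodalTransE, mm_features, margin_ranking_loss) and an assumed choice of sub-energy combinations, not the authors' exact formulation or code.

```python
import torch
import torch.nn as nn

class MultimodalTransE(nn.Module):
    """Illustrative sketch: translation-based triple energy combining
    structural and multimodal (visual + linguistic) entity representations."""

    def __init__(self, n_entities, n_relations, dim, mm_features):
        super().__init__()
        # Structural embeddings learned from the KG alone.
        self.ent_s = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        # Fixed multimodal features (e.g. image + word vectors per entity),
        # mapped into the structural space by a learned projection.
        self.register_buffer("mm_feat", mm_features)        # (n_entities, d_mm)
        self.mm_proj = nn.Linear(mm_features.size(1), dim)

    def sub_energy(self, h, r, t):
        # TransE-style translation energy: ||h + r - t||_2
        return torch.norm(h + r - t, p=2, dim=-1)

    def energy(self, heads, rels, tails):
        r = self.rel(rels)
        h_s, t_s = self.ent_s(heads), self.ent_s(tails)
        h_m = self.mm_proj(self.mm_feat[heads])
        t_m = self.mm_proj(self.mm_feat[tails])
        # Total energy = sum of sub-energies over combinations of
        # structural and multimodal head/tail representations.
        return (self.sub_energy(h_s, r, t_s) + self.sub_energy(h_m, r, t_m)
                + self.sub_energy(h_s, r, t_m) + self.sub_energy(h_m, r, t_s))

def margin_ranking_loss(model, pos, neg, margin=1.0):
    # Hinge loss: correct triples should receive lower energy than corrupted ones.
    e_pos = model.energy(*pos)   # pos / neg are (heads, rels, tails) index tensors
    e_neg = model.energy(*neg)
    return torch.clamp(margin + e_pos - e_neg, min=0.0).mean()
```

Summing sub-energies over all representation pairs is one way to force the structural and multimodal spaces to be mutually translation-compatible; other weightings or subsets of the pairs are equally consistent with the abstract.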

Citation (APA)

Mousselly-Sergieh, H., Botschen, T., Gurevych, I., & Roth, S. (2018). A multimodal translation-based approach for knowledge graph representation learning. In Proceedings of the 7th Joint Conference on Lexical and Computational Semantics (*SEM 2018) (pp. 225–234). Association for Computational Linguistics. https://doi.org/10.18653/v1/S18-2027
