Concept2Box: Joint Geometric Embeddings for Learning Two-View Knowledge Graphs

Abstract

Knowledge graph embeddings (KGE) have been extensively studied to embed large-scale relational data for many real-world applications. Existing methods have long ignored the fact that many KGs contain two fundamentally different views: high-level ontology-view concepts and fine-grained instance-view entities. They usually embed all nodes as vectors in one latent space. However, a single geometric representation fails to capture the structural differences between the two views and lacks probabilistic semantics for concept granularity. We propose Concept2Box, a novel approach that jointly embeds the two views of a KG using dual geometric representations. We model concepts with box embeddings, which learn the hierarchy structure and complex relations among them, such as overlap and disjointness. Box volumes can be interpreted as concepts' granularity. In contrast to concepts, we model entities as vectors. To bridge the gap between concept box embeddings and entity vector embeddings, we propose a novel vector-to-box distance metric and learn both embeddings jointly. Experiments on both the public DBpedia KG and a newly created industrial KG demonstrate the effectiveness of Concept2Box.
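To make the dual-geometry idea concrete, the sketch below illustrates one common way to pair axis-aligned box embeddings for concepts with vector embeddings for entities, including a vector-to-box distance. The parameterization (center plus positive offset) and the inside/outside distance are assumptions chosen for illustration; they are not necessarily the exact metric defined in the Concept2Box paper.

```python
import numpy as np


class BoxEmbedding:
    """Axis-aligned box in R^d, parameterized by a center and positive half side-lengths.
    (Illustrative parameterization; the paper's exact formulation may differ.)"""

    def __init__(self, center, offset):
        self.center = np.asarray(center, dtype=float)
        self.offset = np.abs(np.asarray(offset, dtype=float))  # half side-lengths

    @property
    def lower(self):
        return self.center - self.offset

    @property
    def upper(self):
        return self.center + self.offset

    def volume(self):
        # Box volume can be read as the concept's granularity:
        # broader concepts occupy larger boxes.
        return float(np.prod(self.upper - self.lower))


def vector_to_box_distance(v, box, alpha=0.5):
    """A simple vector-to-box distance (assumed form): an "outside" term that is zero
    when the vector lies within the box, plus a down-weighted "inside" term that pulls
    the vector toward the box center."""
    v = np.asarray(v, dtype=float)
    outside = np.maximum(v - box.upper, 0.0) + np.maximum(box.lower - v, 0.0)
    inside = np.minimum(np.abs(v - box.center), box.offset)
    return float(np.linalg.norm(outside) + alpha * np.linalg.norm(inside))


# Example: score one entity vector against a broad and a narrow concept box.
person = BoxEmbedding(center=[0.0, 0.0], offset=[2.0, 2.0])      # broad concept
scientist = BoxEmbedding(center=[1.0, 1.0], offset=[0.5, 0.5])   # narrow concept
entity = np.array([1.2, 0.9])                                    # instance-view entity vector

print(person.volume(), scientist.volume())        # larger volume = coarser concept
print(vector_to_box_distance(entity, person))     # entity falls inside the broad box
print(vector_to_box_distance(entity, scientist))  # and inside the narrow box as well
```

In a joint training setup, a distance of this kind would typically appear in a margin or cross-entropy loss over entity-concept membership pairs, while separate box-box and vector-vector scores handle concept-concept and entity-entity triples.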

Cite (APA)
Huang, Z., Wang, D., Huang, B., Zhang, C., Shang, J., Liang, Y., … Wang, W. (2023). Concept2Box: Joint Geometric Embeddings for Learning Two-View Knowledge Graphs. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 10105–10118). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.642
