Knowledge graphs (KGs) are among the most popular means of representing knowledge in search engines and other natural-language-processing (NLP) applications. However, KGs remain incomplete, inconsistent, and not entirely accurate. To address these challenges, many state-of-the-art models, such as TransE, TransH, and TransR, have been proposed. TransE and TransH use a single semantic space for both entities and relations, whereas TransR uses two different semantic spaces in its embedding model. A shortcoming of these models is that they ignore the category-specific projection of entities. For example, the entity “Washington” could belong to the person category or the location category, depending on its context or relationships; an entity may therefore involve multiple types or aspects. Embedding all entities in a single semantic space is consequently not a sound basis for an effective model. In this paper, we propose TransET, which maps each entity according to its type; any existing translation-distance-based embedding model, such as TransE or TransR, can then be applied. We evaluated our model on two tasks, link prediction and triple classification, and it achieved a significant and consistent improvement over other state-of-the-art models.
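The core idea described above, projecting each entity with a type-specific matrix before applying a translation-distance score, can be illustrated with a minimal sketch. This is not the authors' implementation: the toy vectors, the entity/type names, and the choice of TransE's L2 score as the downstream model are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4

# Hypothetical type mapping matrices: one projection per entity type
# (the "type mapping matrix" idea; values here are random placeholders).
type_proj = {
    "person": rng.normal(size=(dim, dim)),
    "location": rng.normal(size=(dim, dim)),
}

# Toy entity and relation embeddings (random, for illustration only).
entity = {"Obama": rng.normal(size=dim), "Washington": rng.normal(size=dim)}
relation = {"born_in": rng.normal(size=dim)}

def transe_score(h, r, t):
    # TransE: a plausible triple (h, r, t) should have a small ||h + r - t||.
    return np.linalg.norm(h + r - t)

def transet_score(h_name, h_type, r_name, t_name, t_type):
    # TransET step: project each entity by its type-specific matrix,
    # then score the projected triple with an existing model (TransE here).
    h = type_proj[h_type] @ entity[h_name]
    t = type_proj[t_type] @ entity[t_name]
    return transe_score(h, relation[r_name], t)

# The same surface form "Washington" is scored differently depending on
# whether it is projected as a location or as a person.
as_location = transet_score("Obama", "person", "born_in", "Washington", "location")
as_person = transet_score("Obama", "person", "born_in", "Washington", "person")
```

Because the projection depends on the type, the two scores for “Washington” above differ, which is precisely the context-sensitivity a single shared semantic space cannot express.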
Rahman, M. M., & Takasu, A. (2018). Knowledge graph embedding via entities’ type mapping matrix. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11303 LNCS, pp. 114–125). Springer Verlag. https://doi.org/10.1007/978-3-030-04182-3_11