Learning Latent Relations for Temporal Knowledge Graph Reasoning

Abstract

Temporal Knowledge Graph (TKG) reasoning aims to predict future facts based on historical data. However, owing to limitations in construction tools and data sources, many important associations between entities are omitted from TKGs. We refer to these missing associations as latent relations. Most existing methods fall short in explicitly capturing intra-time latent relations between co-occurring entities and inter-time latent relations between entities that appear at different times. To tackle these problems, we propose a novel Latent relations Learning method for TKG reasoning, namely L2TKG. Specifically, we first utilize a Structural Encoder (SE) to obtain representations of entities at each timestamp. We then design a Latent Relations Learning (LRL) module to mine and exploit the intra- and inter-time latent relations. Finally, we extract the temporal representations from the outputs of SE and LRL for entity prediction. Extensive experiments on four datasets demonstrate the effectiveness of L2TKG.
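The SE → LRL → prediction pipeline described above can be sketched in simplified form. This is not the authors' implementation: the mean-aggregation encoder, the top-k dot-product scoring of latent neighbours, the fusion weights, and all function names here are illustrative assumptions standing in for the paper's relational GNN encoder and learned latent-relation module.

```python
import numpy as np

rng = np.random.default_rng(0)

def structural_encoder(H, edges):
    # Stand-in for SE: one hop of mean aggregation over the observed
    # snapshot graph at a single timestamp (the paper uses a relational GNN).
    out = H.copy()
    for dst in range(H.shape[0]):
        nbrs = [s for s, d in edges if d == dst]
        if nbrs:
            out[dst] = 0.5 * H[dst] + 0.5 * H[nbrs].mean(axis=0)
    return out

def latent_relation_learning(H, k=2):
    # Stand-in for LRL: score every entity pair by dot-product similarity,
    # keep the top-k highest-scoring "latent" neighbours per entity, and
    # aggregate over those latent edges with softmax weights.
    sim = H @ H.T
    np.fill_diagonal(sim, -np.inf)          # no self-loops as latent edges
    topk = np.argsort(sim, axis=1)[:, -k:]  # indices of latent neighbours
    w = np.exp(sim[np.arange(len(H))[:, None], topk])
    w /= w.sum(axis=1, keepdims=True)       # normalise attention weights
    return (w[..., None] * H[topk]).sum(axis=1)

# Toy run: 5 entities, 8-dim embeddings, one observed snapshot.
H0 = rng.normal(size=(5, 8))
edges = [(0, 1), (1, 2), (3, 4)]      # observed (subject, object) pairs
H_se = structural_encoder(H0, edges)
H_lrl = latent_relation_learning(H_se)
H_final = 0.5 * H_se + 0.5 * H_lrl    # fuse SE and LRL representations

scores = H_final @ H_final.T          # entity-prediction scores
print(scores.shape)                   # (5, 5)
```

The key idea this sketch captures is that latent edges are not read from the graph but mined from entity representations, so entities that never co-occur in the data can still exchange information before prediction.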

Citation (APA)
Zhang, M., Xia, Y., Liu, Q., Wu, S., & Wang, L. (2023). Learning Latent Relations for Temporal Knowledge Graph Reasoning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 12617–12631). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.705
