Graph Hawkes Transformer for Extrapolated Reasoning on Temporal Knowledge Graphs

Abstract

Temporal Knowledge Graph (TKG) reasoning has attracted increasing attention due to its enormous potential value; the critical issue is how to model complex temporal structure information effectively. Recent studies encode graph snapshots into a hidden vector space and then perform heuristic deductions, which works well for entity prediction. However, these approaches cannot predict when an event will occur, and they have two further limitations: 1) many facts unrelated to the query can confuse the model; 2) information is forgotten over long-term evolutionary processes. To this end, we propose the Graph Hawkes Transformer (GHT) for both TKG entity prediction and time prediction at future timestamps. GHT contains two Transformer variants, which capture instantaneous structural information and temporal evolution information, respectively, together with a new relational continuous-time encoding function that drives feature evolution under a Hawkes process. Extensive experiments on four public datasets demonstrate its superior performance, especially on long-term evolution tasks.
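To make the continuous-time encoding idea concrete, here is a minimal sketch of a sinusoidal functional time encoding of the kind commonly used in temporal graph models. This is an illustrative stand-in, not the paper's actual relational encoding function; the frequency spacing and dimensionality are assumptions chosen for the example.

```python
import numpy as np

def continuous_time_encoding(delta_t: float, dim: int = 8) -> np.ndarray:
    """Map an elapsed time delta_t to a dim-dimensional feature vector
    using cosine basis functions at geometrically spaced frequencies.
    Illustrative only: the paper's relational continuous-time encoding
    differs in form and is conditioned on the relation."""
    # Hypothetical frequency schedule: spans time scales from 1 to 10^4.
    freqs = 1.0 / (10.0 ** np.linspace(0, 4, dim))
    return np.cos(freqs * delta_t)

# Encode the gap between a historical event and the query timestamp.
phi = continuous_time_encoding(delta_t=3.0, dim=8)
print(phi.shape)  # (8,)
```

An encoding like this lets the model weight historical facts by how long ago they occurred, which is the role the Hawkes-process intensity plays in the evolution of entity features.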

Citation (APA)

Sun, H., Geng, S., Zhong, J., Hu, H., & He, K. (2022). Graph Hawkes Transformer for Extrapolated Reasoning on Temporal Knowledge Graphs. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 7481–7493). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.507
