Multi-hop reasoning is an effective and explainable approach to predicting missing facts in Knowledge Graphs (KGs). It usually adopts a Reinforcement Learning (RL) framework and searches over the KG for an evidential path. However, due to the large exploration space, RL-based models suffer from a severe sparse-reward problem and require a large number of trials. Moreover, their exploration can be biased towards spurious paths that coincidentally lead to correct answers. To address both problems, we propose a simple but effective RL-based method called RARL (Rule-Aware RL). It injects high-quality symbolic rules into the model's reasoning process and employs partially random beam search, which not only increases the probability that paths receive rewards but also alleviates the impact of spurious paths. Experimental results show that RARL outperforms existing multi-hop methods in terms of Hit@1 and MRR.
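To make the two mechanisms concrete, the following is a minimal, hypothetical Python sketch of how rule-guided scoring and a partially random beam search might interact over a toy KG. Everything here (the toy graph, the rule table, the placeholder policy scorer, and the parameter names) is an illustrative assumption, not the paper's actual implementation; in RARL the policy is learned with RL rather than sampled at random.

```python
# A minimal sketch of rule-aware, partially random beam search over a toy KG.
# All names (toy_kg, RULES, policy_score, random_frac, ...) are hypothetical
# placeholders used only to illustrate the general idea from the abstract.
import random

# Toy KG: head entity -> list of (relation, tail entity) edges.
toy_kg = {
    "alice": [("born_in", "paris"), ("works_for", "acme")],
    "paris": [("capital_of", "france"), ("located_in", "france")],
    "acme": [("headquartered_in", "london")],
    "france": [], "london": [],
}

# Hypothetical high-quality rules: query relation -> relation sequences
# (rule bodies) whose paths are assumed to imply the query relation.
RULES = {"nationality": [("born_in", "capital_of"), ("born_in", "located_in")]}

def policy_score(path, relation, tail):
    """Stand-in for a learned policy network's score for taking an edge."""
    return random.random()  # placeholder score in [0, 1)

def rule_bonus(query_rel, path_relations):
    """Boost candidates whose relation sequence is a prefix of a rule body."""
    for body in RULES.get(query_rel, []):
        if tuple(path_relations) == body[:len(path_relations)]:
            return 1.0
    return 0.0

def partially_random_beam_search(start, query_rel, beam_size=4, hops=2, random_frac=0.25):
    """Keep the top-scored candidates for most of the beam, but reserve a
    fraction of slots for randomly chosen candidates to encourage exploration."""
    beam = [([start], [])]  # (entity path, relation path)
    for _ in range(hops):
        candidates = []
        for ent_path, rel_path in beam:
            for rel, tail in toy_kg.get(ent_path[-1], []):
                score = (policy_score(ent_path, rel, tail)
                         + rule_bonus(query_rel, rel_path + [rel]))
                candidates.append((score, ent_path + [tail], rel_path + [rel]))
        if not candidates:
            break
        candidates.sort(key=lambda c: c[0], reverse=True)
        n_random = int(beam_size * random_frac)
        kept = candidates[:beam_size - n_random]
        rest = candidates[beam_size - n_random:]
        kept += random.sample(rest, min(n_random, len(rest)))
        beam = [(ep, rp) for _, ep, rp in kept]
    return beam

if __name__ == "__main__":
    for entities, relations in partially_random_beam_search("alice", "nationality"):
        print(" -> ".join(entities), "| relations:", relations)
```

In this sketch, the rule bonus densifies the reward signal by favoring paths that match trusted rule bodies, while the randomly filled beam slots keep exploration from collapsing onto a few high-scoring but possibly spurious paths.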
Citation:
Hou, Z., Jin, X., Li, Z., & Bai, L. (2021). Rule-Aware Reinforcement Learning for Knowledge Graph Reasoning. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 4687–4692). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.412