Reasoning like human: Hierarchical reinforcement learning for knowledge graph reasoning

75 Citations · 127 Mendeley Readers

Abstract

Knowledge Graphs typically suffer from incompleteness. A popular approach to knowledge graph completion is to infer missing knowledge by multi-hop reasoning over the information found along other paths connecting a pair of entities. However, multi-hop reasoning remains challenging because the reasoning process often encounters the multiple-semantics issue, in which a relation or an entity has more than one meaning. To deal with this situation, we propose a novel Hierarchical Reinforcement Learning framework that automatically learns chains of reasoning from a Knowledge Graph. Our framework is inspired by the hierarchical process through which a human being cognitively handles ambiguous cases. The whole reasoning process is decomposed into a hierarchy of two-level Reinforcement Learning policies for encoding historical information and learning a structured action space. As a consequence, the framework is a more feasible and natural way of dealing with the multiple-semantics issue. Experimental results show that our proposed model achieves substantial improvements on ambiguous relation tasks.
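The two-level decomposition described in the abstract can be illustrated with a minimal sketch. All names here are hypothetical and the policies are deterministic placeholders, not the authors' learned models: a high-level policy selects among "sense" clusters of an ambiguous relation, and a low-level policy selects a concrete edge within the chosen cluster.

```python
# Illustrative sketch of a two-level hierarchical reasoning step.
# All identifiers are hypothetical; learned policies are replaced with
# simple deterministic stand-ins.

def high_level_policy(history, clusters):
    """Pick the sense cluster whose label shares a token with the query
    history. Stands in for a learned policy over encoded history."""
    tokens = set(history.split())
    for label in clusters:
        if tokens & set(label.split("_")):
            return label
    # Fall back to the first cluster when nothing matches.
    return next(iter(clusters))

def low_level_policy(history, edges):
    """Pick a concrete (relation, target-entity) edge inside the chosen
    cluster; placeholder for a learned sub-policy."""
    return edges[0]

def reason_one_step(history, structured_actions):
    """One hop of reasoning: high-level choice of sense, then low-level
    choice of edge within that sense's structured action space."""
    cluster = high_level_policy(history, structured_actions)
    return cluster, low_level_policy(history, structured_actions[cluster])

# Hypothetical structured action space for the ambiguous relation
# "works_in": one cluster per sense (location vs. discipline).
actions = {
    "works_in_city": [("works_in", "Paris")],
    "works_in_field": [("works_in", "AI")],
}
cluster, edge = reason_one_step("which city does Marie work in", actions)
```

With this toy input, the high-level policy resolves "works_in" to its location sense before the low-level policy commits to an edge, which is the disambiguation benefit the hierarchy is meant to provide.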

Citation (APA)

Wan, G., Pan, S., Gong, C., Zhou, C., & Haffari, G. (2020). Reasoning like human: Hierarchical reinforcement learning for knowledge graph reasoning. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2021-January, pp. 1926–1932). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/267
