Neural link predictors are useful for identifying missing edges in large-scale Knowledge Graphs. However, it is still not clear how to use these models for answering more complex queries containing logical conjunctions (∧), disjunctions (∨), and existential quantifiers (∃). We propose a framework for efficiently answering complex queries on incomplete Knowledge Graphs. We translate each query into an end-to-end differentiable objective, where the truth value of each atom is computed by a pre-trained neural link predictor. We then analyse two solutions to the optimisation problem, namely gradient-based optimisation and combinatorial search. In our experiments, the proposed approach produces more accurate results than state-of-the-art methods (black-box models trained on millions of generated queries) without the need for training on a large and diverse set of complex queries. Using orders of magnitude less training data, we obtain relative improvements ranging from 8% up to 40% in Hits@3 across multiple knowledge graphs. We find that it is possible to explain the outcome of our model in terms of the intermediate solutions identified for each of the complex query atoms. All our source code and datasets are available online.
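As a rough illustration of the idea (this sketch is not taken from the paper's released code; the `score` function, the query shape, and the helper names are hypothetical assumptions), a conjunctive query such as ?T : ∃V. p(a, V) ∧ q(V, T) can be answered by scoring each atom with a pre-trained link predictor, combining the atom scores with a t-norm (e.g. the product), and searching over candidate entities for the existentially quantified variable, for instance with a beam search:

```python
import torch

def answer_path_query(score, anchor, rel1, rel2, num_entities, beam_size=10):
    """Answer the query ?T : exists V . rel1(anchor, V) AND rel2(V, T)
    using a pre-trained link predictor `score(head, relation)` that returns
    a tensor of scores over all candidate tail entities.

    NOTE: the signature of `score`, the sigmoid calibration, and the use of
    the product t-norm are illustrative assumptions, not the paper's exact
    implementation.
    """
    # Score the first atom rel1(anchor, V) for every candidate V,
    # and keep the top-k candidates as the beam.
    atom1 = torch.sigmoid(score(anchor, rel1))        # shape: (num_entities,)
    top_scores, top_vs = atom1.topk(beam_size)

    # For each candidate V in the beam, score the second atom rel2(V, T)
    # for every candidate T, and combine the two atoms with a product t-norm.
    best = torch.zeros(num_entities)
    for s_v, v in zip(top_scores, top_vs):
        atom2 = torch.sigmoid(score(v.item(), rel2))  # shape: (num_entities,)
        combined = s_v * atom2                        # product t-norm
        best = torch.maximum(best, combined)          # best over intermediate V

    # Rank target entities T by their best combined score.
    return best.argsort(descending=True)
```

Disjunctions would be handled analogously by aggregating atom scores with a t-conorm (e.g. the maximum), while the gradient-based variant instead optimises continuous embeddings for the query variables rather than enumerating candidate entities.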
Minervini, P., Arakelyan, E., Daza, D., & Cochez, M. (2022). Complex Query Answering with Neural Link Predictors (Extended Abstract). In IJCAI International Joint Conference on Artificial Intelligence (pp. 5309–5313). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/741