Higher-Order Explanations of Graph Neural Networks via Relevant Walks

116 citations · 174 Mendeley readers


Abstract

Graph Neural Networks (GNNs) are a popular approach for predicting graph-structured data. Because GNNs tightly entangle the input graph into the neural network structure, common explainable AI approaches are not applicable. To a large extent, GNNs have remained black boxes for the user so far. In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions, i.e., by identifying groups of edges that jointly contribute to the prediction. Practically, we find that such explanations can be extracted using a nested attribution scheme, where existing techniques such as layer-wise relevance propagation (LRP) can be applied at each step. The output is a collection of walks into the input graph that are relevant for the prediction. Our novel explanation method, which we denote by GNN-LRP, is applicable to a broad range of graph neural networks and lets us extract practically relevant insights on sentiment analysis of text data, structure-property relationships in quantum chemistry, and image classification.
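The nested attribution scheme described in the abstract can be illustrated with a short sketch. The code below is a minimal NumPy toy, not the authors' released implementation: it runs a small GCN-style network and scores a single walk by applying one epsilon-LRP step per message-passing layer, each time restricting the relevance flow to the walk's node at that depth (the paper itself favors LRP-gamma rules for deeper networks). All names here (forward, walk_relevance, the toy graph) are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(A, X, Ws):
    # Toy GCN: H^{t+1} = relu(A @ H^t @ W^t); keep every layer's activations.
    acts = [X]
    for W in Ws:
        acts.append(relu(A @ acts[-1] @ W))
    return acts

def walk_relevance(A, acts, Ws, walk, out_unit, eps=1e-9):
    """Relevance of one walk (w_0, ..., w_L) for output unit `out_unit` of
    node w_L, via a nested epsilon-LRP pass: one LRP step per layer, with
    the message source restricted to the walk node at that depth."""
    L = len(Ws)
    assert len(walk) == L + 1
    # Seed relevance at the top layer with the chosen output activation.
    R = np.zeros(Ws[-1].shape[1])
    R[out_unit] = acts[-1][walk[-1], out_unit]
    for t in reversed(range(L)):
        H, W = acts[t], Ws[t]
        j, i = walk[t + 1], walk[t]        # target / source node of this hop
        Z = (A @ H @ W)[j]                 # pre-activations of node j, shape (d_out,)
        # Contribution of (source node i, input feature p) to output feature q:
        #   A[j, i] * H[i, p] * W[p, q]
        C = A[j, i] * H[i][:, None] * W    # shape (d_in, d_out)
        denom = Z + eps * np.where(Z >= 0, 1.0, -1.0)
        R = (C / denom[None, :]) @ R       # relevance now lives on node i's features
    return R.sum()

# Tiny example: 3-node path graph with self-loops, 2 message-passing layers.
rng = np.random.default_rng(0)
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
X = rng.normal(size=(3, 4))
Ws = [rng.normal(size=(4, 4)), rng.normal(size=(4, 2))]
acts = forward(A, X, Ws)

# Score every length-2 walk ending at node 2 for output unit 0.
for i in range(3):
    for j in range(3):
        if A[j, i] and A[2, j]:
            print((i, j, 2), walk_relevance(A, acts, Ws, (i, j, 2), out_unit=0))
```

Under these assumptions, summing walk_relevance over all walks ending at the read-out node approximately recovers the seeded output relevance (up to the epsilon stabilizer), which is the conservation property that lets the per-walk scores be read as a decomposition of the prediction.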

Citation (APA)

Schnake, T., Eberle, O., Lederer, J., Nakajima, S., Schütt, K. T., Müller, K.-R., & Montavon, G. (2022). Higher-Order Explanations of Graph Neural Networks via Relevant Walks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11), 7581–7596. https://doi.org/10.1109/TPAMI.2021.3115452
