SEEN: Sharpening Explanations for Graph Neural Networks Using Explanations From Neighborhoods

Abstract

Explaining the basis for predictions obtained from graph neural networks (GNNs) is critical for the credible use of GNN models on real-world problems. Driven by the rapid growth of GNN applications, recent progress in explaining GNN predictions, including sensitivity analysis, perturbation methods, and attribution methods, has shown great promise. In this study, we propose SEEN, a method that improves explanation quality for node classification tasks and can be applied in a post hoc manner by aggregating auxiliary explanations from important neighboring nodes. Applying SEEN requires no modification of the graph and, owing to its independent mechanism, can be combined with diverse explainability techniques. Experiments on matching motif-participating nodes in a given graph show improvements in explanation accuracy of up to 12.71% and demonstrate a correlation between the auxiliary explanations and the enhanced explanation accuracy obtained by leveraging their contributions. SEEN offers a simple but effective way to enhance the explanation quality of GNN model outputs and is applicable in combination with most explainability techniques.
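The core idea described in the abstract, sharpening a node's explanation by aggregating auxiliary explanations from its important neighbors, can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name, the neighbor-importance scores, and the mixing weight `gamma` are hypothetical and stand in for whatever base explainer and aggregation rule the paper actually uses.

```python
import numpy as np

def seen_sharpen(base_explanations, neighbor_importance, target, neighbors, gamma=0.5):
    """Sharpen the explanation for `target` by mixing in auxiliary explanations
    from its important neighbors. Illustrative sketch only: the weighting and
    blending scheme here is an assumption, not the paper's exact formulation.

    base_explanations   : dict node -> attribution vector from any base explainer
    neighbor_importance : dict node -> importance score of that neighbor
    """
    own = base_explanations[target]
    if not neighbors:
        return own  # no neighbors selected: keep the original explanation
    weights = np.array([neighbor_importance[n] for n in neighbors], dtype=float)
    if weights.sum() == 0:
        return own  # no informative neighbors: fall back to the original explanation
    weights /= weights.sum()  # normalize importance scores of the selected neighbors
    aux = sum(w * base_explanations[n] for w, n in zip(weights, neighbors))
    return (1.0 - gamma) * own + gamma * aux  # blend own and neighborhood evidence


# Toy usage: three nodes with 4-dimensional feature attributions from a base explainer
explanations = {0: np.array([0.9, 0.1, 0.0, 0.0]),
                1: np.array([0.8, 0.2, 0.0, 0.0]),
                2: np.array([0.1, 0.1, 0.7, 0.1])}
importance = {1: 0.9, 2: 0.2}
print(seen_sharpen(explanations, importance, target=0, neighbors=[1, 2]))
```

Because the aggregation operates only on already-computed attributions, a post hoc step of this kind can sit on top of most base explainers without touching the graph or the trained GNN, which is consistent with the model-independence claimed in the abstract.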

Citation (APA)

Cho, H., Oh, Y., & Jeon, E. (2023). SEEN: Sharpening Explanations for Graph Neural Networks Using Explanations From Neighborhoods. Advances in Artificial Intelligence and Machine Learning, 3(2), 1165–1179. https://doi.org/10.54364/AAIML.2023.1168
