Providing Post-Hoc Explanation for Node Representation Learning Models Through Inductive Conformal Predictions

Abstract

Learning with graph-structured data, such as social, biological, and financial networks, requires effective low-dimensional representations to handle their large and complex interactions. Recently, with advances in neural networks and embedding algorithms, many unsupervised approaches have been proposed for downstream tasks, with promising results; however, there has been limited research on interpreting the resulting unsupervised representations and, specifically, on understanding which neighboring nodes contribute to the representation of a given node. To mitigate this problem, we propose a statistical framework to interpret the learned representations. Many existing methods, which are designed for supervised node representation models, compute the difference in prediction scores after perturbing the edges of a candidate explanation node; in contrast, our framework leverages a conformal prediction (CP)-based statistical test to verify the importance of the candidate node to each node representation. The proposed framework was verified across a range of experimental settings and showed promising results compared with recent baseline methods.
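To make the CP-based idea concrete, below is a minimal sketch of an inductive conformal p-value computation of the general kind the abstract alludes to. It is not the paper's actual procedure: the function name `icp_p_value`, the use of an embedding-shift perturbation score, and the placeholder calibration data are all illustrative assumptions; the paper's specific nonconformity measure and calibration design are not given in the abstract.

```python
import numpy as np

def icp_p_value(cal_scores, test_score):
    """Inductive conformal p-value: the fraction of calibration
    nonconformity scores at least as extreme as the test score,
    with the +1 smoothing standard in inductive CP."""
    cal_scores = np.asarray(cal_scores)
    return (1 + np.sum(cal_scores >= test_score)) / (len(cal_scores) + 1)

# Hypothetical usage: score a candidate neighbor's importance by how much
# removing its edges shifts a node's learned embedding, then test that
# shift against shifts observed on a held-out calibration set.
rng = np.random.default_rng(0)
cal_scores = rng.exponential(scale=1.0, size=200)  # placeholder calibration scores
test_score = 3.5                                   # placeholder embedding shift
p = icp_p_value(cal_scores, test_score)
print(f"p-value = {p:.3f}")  # a small p-value flags the candidate as important
```

Under the usual exchangeability assumption between calibration and test scores, this p-value is valid, which is what lets a perturbation score be turned into a calibrated significance test rather than an uncalibrated ranking.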

Cite

CITATION STYLE

APA

Park, H. (2023). Providing Post-Hoc Explanation for Node Representation Learning Models Through Inductive Conformal Predictions. IEEE Access, 11, 1202–1212. https://doi.org/10.1109/ACCESS.2022.3233036
