GRETEL: Graph Counterfactual Explanation Evaluation Framework


Abstract

Machine Learning (ML) systems are a building block of the modern tools that impact our daily life across several application domains. Due to their black-box nature, such systems are rarely adopted in domains (e.g., health, finance) where understanding the decision process is of paramount importance. Explanation methods have been developed to clarify how an ML model reaches a specific decision for a given case/instance. Graph Counterfactual Explanations (GCE) is one of the explanation techniques adopted in the Graph Learning domain. Existing works on Graph Counterfactual Explanations diverge mostly in problem definition, application domain, test data, and evaluation metrics, and most of them do not compare exhaustively against other counterfactual explanation techniques in the literature. We present GRETEL, a unified framework to develop and test GCE methods in several settings. GRETEL is a highly extensible evaluation framework that promotes Open Science and reproducibility of evaluation by providing a set of well-defined mechanisms to easily integrate and manage real and synthetic datasets, ML models, state-of-the-art explanation techniques, and evaluation measures. Lastly, we present the experiments conducted to integrate and test several existing scenarios (datasets, measures, explainers).
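To make the evaluation-framework idea concrete, the sketch below shows a minimal GCE evaluation loop in the spirit described by the abstract: a dataset of graphs, a black-box oracle, an explainer that searches for a counterfactual, and evaluation measures (correctness and graph edit distance). All class and function names here are illustrative assumptions, not GRETEL's actual API; the toy oracle and explainer exist only to make the loop runnable.

```python
# Hypothetical sketch of a GCE evaluation loop; names are illustrative,
# not GRETEL's actual API.
from dataclasses import dataclass


@dataclass
class Graph:
    edges: set  # undirected edges as frozenset({u, v}) pairs


@dataclass
class Dataset:
    instances: list  # list of Graph


class Oracle:
    """Stand-in black-box classifier: labels a graph by parity of its edge count."""
    def predict(self, g: Graph) -> int:
        return len(g.edges) % 2


class RemoveEdgeExplainer:
    """Toy explainer: deletes a single edge if that flips the oracle's prediction."""
    def explain(self, g: Graph, oracle: Oracle) -> Graph:
        original = oracle.predict(g)
        for e in g.edges:
            candidate = Graph(edges=g.edges - {e})
            if oracle.predict(candidate) != original:
                return candidate
        return g  # no counterfactual found; return the instance unchanged


def graph_edit_distance(g1: Graph, g2: Graph) -> int:
    # Simplified GED: count of edges present in one graph but not the other.
    return len(g1.edges ^ g2.edges)


def evaluate(dataset: Dataset, oracle: Oracle, explainer) -> dict:
    """Run the explainer on every instance and aggregate evaluation measures."""
    flips, distances = 0, []
    for g in dataset.instances:
        cf = explainer.explain(g, oracle)
        flips += int(oracle.predict(cf) != oracle.predict(g))
        distances.append(graph_edit_distance(g, cf))
    return {
        "correctness": flips / len(dataset.instances),
        "mean_ged": sum(distances) / len(distances),
    }


if __name__ == "__main__":
    ds = Dataset(instances=[
        Graph(edges={frozenset({0, 1}), frozenset({1, 2})}),
        Graph(edges={frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 0})}),
    ])
    print(evaluate(ds, Oracle(), RemoveEdgeExplainer()))
```

The point of the sketch is the decoupling: swapping in a different dataset, oracle, explainer, or measure requires no change to the evaluation loop, which is the extensibility property the abstract attributes to GRETEL.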

Cite

APA

Prado-Romero, M. A., & Stilo, G. (2022). GRETEL: Graph Counterfactual Explanation Evaluation Framework. In International Conference on Information and Knowledge Management, Proceedings (pp. 4389–4393). Association for Computing Machinery. https://doi.org/10.1145/3511808.3557608
