VCNet: A Self-explaining Model for Realistic Counterfactual Generation

Abstract

Counterfactual explanation is a common class of methods for producing local explanations of machine learning decisions. For a given instance, these methods aim to find the smallest modification of feature values that changes the decision predicted by a machine learning model. One of the challenges of counterfactual explanation is the efficient generation of realistic counterfactuals. To address this challenge, we propose VCNet – Variational Counter Net – a model architecture that combines a predictor and a counterfactual generator trained jointly, for regression or classification tasks. VCNet both generates predictions and produces counterfactual explanations without having to solve a separate minimisation problem. Our contribution is the generation of counterfactuals that are close to the distribution of the predicted class. This is achieved by learning a variational autoencoder conditioned on the output of the predictor, in a joint-training fashion. We present an empirical evaluation on tabular datasets and across several interpretability metrics. The results are competitive with the state-of-the-art method.
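The abstract describes an architecture in which a classifier and a conditional variational autoencoder share the input and are trained jointly, with the VAE conditioned on the classifier's output, so a counterfactual is obtained by decoding under a different class condition. The sketch below is a minimal, illustrative PyTorch-style rendition of that idea; the class name `VCNetSketch`, layer sizes, loss weights, and the `counterfactual` query are assumptions for illustration, not the authors' reference implementation.

```python
# Illustrative sketch (assumptions, not the paper's code): a predictor and a
# conditional VAE trained jointly, with the VAE conditioned on the predicted
# class probabilities, as described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VCNetSketch(nn.Module):
    def __init__(self, in_dim: int, n_classes: int, latent_dim: int = 8, hidden: int = 64):
        super().__init__()
        self.n_classes = n_classes
        # Predictor branch: maps an instance to class logits.
        self.predictor = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes)
        )
        # Encoder of the conditional VAE: instance + class condition -> (mu, log_var).
        self.encoder = nn.Sequential(nn.Linear(in_dim + n_classes, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.log_var = nn.Linear(hidden, latent_dim)
        # Decoder: reconstructs an instance from (latent code, class condition).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + n_classes, hidden), nn.ReLU(), nn.Linear(hidden, in_dim)
        )

    def forward(self, x: torch.Tensor):
        logits = self.predictor(x)
        cond = F.softmax(logits, dim=-1)  # condition the VAE on the predicted class
        h = self.encoder(torch.cat([x, cond], dim=-1))
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterisation
        x_hat = self.decoder(torch.cat([z, cond], dim=-1))
        return logits, x_hat, mu, log_var

    @torch.no_grad()
    def counterfactual(self, x: torch.Tensor, target_class: int) -> torch.Tensor:
        """Decode the latent code of x under a different (target) class condition."""
        logits = self.predictor(x)
        cond = F.softmax(logits, dim=-1)
        h = self.encoder(torch.cat([x, cond], dim=-1))
        z = self.mu(h)  # use the posterior mean at query time
        target = F.one_hot(
            torch.full((x.size(0),), target_class, dtype=torch.long), self.n_classes
        ).float()
        return self.decoder(torch.cat([z, target], dim=-1))


def joint_loss(logits, x_hat, mu, log_var, x, y, beta: float = 1.0, lam: float = 1.0):
    # Joint training objective: classification + reconstruction + KL terms
    # (the weighting scheme here is an assumption).
    ce = F.cross_entropy(logits, y)
    rec = F.mse_loss(x_hat, x)
    kl = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
    return ce + lam * (rec + beta * kl)
```

At query time, a counterfactual for an instance is obtained with a single decoder pass (e.g. `model.counterfactual(x, target_class=1)`), which is what allows explanations to be produced without solving a separate per-instance optimisation.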

Citation (APA)
Guyomard, V., Fessant, F., Guyet, T., Bouadi, T., & Termier, A. (2023). VCNet: A Self-explaining Model for Realistic Counterfactual Generation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13713 LNAI, pp. 437–453). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-26387-3_27
