Influence-Driven Explanations for Bayesian Network Classifiers

Abstract

We propose a novel approach to building influence-driven explanations (IDXs) for (discrete) Bayesian network classifiers (BCs). IDXs feature two main advantages with respect to other commonly adopted explanation methods. First, IDXs may be generated using the (causal) influences between intermediate, in addition to merely input and output, variables within BCs, thus providing a deep, rather than shallow, account of the BCs’ behaviour. Second, IDXs are generated according to a configurable set of properties, specifying which influences between variables count towards explanations. Our approach is thus flexible and can be tailored to the requirements of particular contexts or users. Leveraging this flexibility, we propose novel IDX instances as well as IDX instances capturing existing approaches. We demonstrate IDXs’ capability to explain various forms of BCs, and assess the advantages of our proposed IDX instances with both theoretical and empirical analyses.
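
To make the idea of property-based influence selection concrete, here is a minimal sketch in Python. It is not the paper's formal definitions: the toy naive Bayes classifier, the variable names (C, F1, F2), the probabilities, and the selection property ("dropping the influencer's evidence lowers the posterior of the predicted class") are all invented for illustration only.

```python
# Illustrative sketch: build an influence-based explanation for a toy
# (discrete) naive Bayes classifier by keeping only influences that
# satisfy a chosen property. The property here is a hypothetical
# stand-in, not one of the paper's formally defined properties.

prior = {0: 0.5, 1: 0.5}        # P(C) for a binary class C (invented)
likelihood = {                  # P(F_i = 1 | C) for binary features (invented)
    "F1": {0: 0.2, 1: 0.8},
    "F2": {0: 0.6, 1: 0.4},
}

def posterior(evidence):
    """P(C | evidence) for the toy naive Bayes BC, by enumeration."""
    scores = {}
    for c in (0, 1):
        p = prior[c]
        for f, v in evidence.items():
            p1 = likelihood[f][c]
            p *= p1 if v == 1 else 1 - p1
        scores[c] = p
    z = sum(scores.values())
    return {c: p / z for c, p in scores.items()}

def idx(evidence):
    """Keep influences F -> C that satisfy the illustrative property:
    removing F's evidence lowers the posterior of the predicted class."""
    post = posterior(evidence)
    c_star = max(post, key=post.get)  # predicted class
    explanation = []
    for f in evidence:
        reduced = {g: v for g, v in evidence.items() if g != f}
        if posterior(reduced)[c_star] < post[c_star]:
            explanation.append((f, "C", "positive influence"))
    return c_star, explanation

# Example: with F1 = 1 and F2 = 0 the classifier predicts C = 1,
# and both features qualify as positive influences under the property.
print(idx({"F1": 1, "F2": 0}))
```

Swapping in a different predicate inside idx yields a different explanation from the same classifier, which is the configurability the abstract refers to; the paper additionally covers influences involving intermediate variables, which this single-layer sketch omits.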

Citation (APA)

Albini, E., Rago, A., Baroni, P., & Toni, F. (2021). Influence-Driven Explanations for Bayesian Network Classifiers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13031 LNAI, pp. 88–100). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-89188-6_7
