Descriptive Accuracy in Explanations: The Case of Probabilistic Classifiers

Abstract

A user receiving an explanation for outcomes produced by an artificially intelligent system expects that it satisfies the key property of descriptive accuracy (DA), i.e. that the explanation contents are in correspondence with the internal working of the system. Crucial as this property appears to be, it has been somewhat overlooked in the XAI literature to date. To address this problem, we consider the questions of formalising DA and of analysing its satisfaction by explanation methods. We provide formal definitions of naive, structural and dialectical DA, using the family of probabilistic classifiers as the context for our analysis. We evaluate the satisfaction of our notions of DA by several explanation methods, namely two popular feature-attribution methods from the literature and a novel form of explanation that we propose, and we complement our analysis with experiments carried out on a varied selection of concrete probabilistic classifiers.
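As an informal illustration of the naive reading of DA sketched in the abstract, the snippet below compares an occlusion-style feature attribution with the internal per-feature log-odds contributions of a Bernoulli naive Bayes classifier and checks whether they agree in sign. This is a minimal, hypothetical sketch of the general idea only; it does not reproduce the paper's formal definitions of naive, structural or dialectical DA, and the dataset, classifier and attribution method are illustrative assumptions.

# Illustrative sketch: naive "sign agreement" between an occlusion-style
# attribution and the internal per-feature log-odds contributions of a
# Bernoulli naive Bayes classifier (not the paper's formal DA definitions).
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 4))        # toy binary features
y = (X[:, 0] | X[:, 1]).astype(int)          # toy target
clf = BernoulliNB().fit(X, y)

x = np.array([1, 0, 1, 0])

# Internal working: each feature's contribution to the log-odds of class 1
log_p = clf.feature_log_prob_                # shape (n_classes, n_features)
internal = np.where(
    x == 1,
    log_p[1] - log_p[0],
    np.log1p(-np.exp(log_p[1])) - np.log1p(-np.exp(log_p[0])),
)

# Explanation: occlusion-style attribution (flip each feature and measure the
# change in the predicted probability of class 1)
p = clf.predict_proba(x.reshape(1, -1))[0, 1]
attribution = np.empty(len(x))
for i in range(len(x)):
    x_flipped = x.copy()
    x_flipped[i] = 1 - x_flipped[i]
    attribution[i] = p - clf.predict_proba(x_flipped.reshape(1, -1))[0, 1]

# Naive check: does the explanation agree in sign with the internal
# contributions, feature by feature?
print("per-feature sign agreement:", np.sign(attribution) == np.sign(internal))

A feature for which the two signs disagree would be one where the explanation and the classifier's internal working point in opposite directions, which is the kind of mismatch a notion of descriptive accuracy is meant to expose.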

Citation (APA)

Albini, E., Rago, A., Baroni, P., & Toni, F. (2022). Descriptive Accuracy in Explanations: The Case of Probabilistic Classifiers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13562 LNAI, pp. 279–294). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-18843-5_19
