Calibrate to Interpret

Abstract

Trustworthy machine learning (ML) is driving a large body of work in the ML community aimed at improving ML acceptance and adoption. The main aspects of trustworthy ML are the following: fairness, uncertainty, robustness, explainability and formal guarantees. Each of these individual domains attracts the ML community's interest, as the number of related publications shows. However, few works tackle the interconnections between these fields. In this paper we establish a first link between uncertainty and explainability by studying the relation between calibration and interpretation. Since the calibration of a given model changes the way it scores samples, and interpretation approaches often rely on these scores, it seems safe to assume that the confidence calibration of a model interacts with our ability to interpret it. We show, in the context of networks trained on image classification tasks, to what extent interpretations are sensitive to confidence calibration. This leads us to suggest a simple practice to improve interpretation outcomes: Calibrate to Interpret.
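The practice the abstract suggests can be illustrated with a minimal sketch, assuming a PyTorch image classifier: temperature scaling (a common post-hoc calibration technique, used here only as an example and not necessarily the method studied in the paper) is fitted on held-out validation logits, and the resulting temperature is applied before computing a plain gradient saliency map, one of the score-based interpretation methods the abstract alludes to. All function and variable names below (fit_temperature, saliency_map, val_logits, val_labels, test_image) are hypothetical.

import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, lr=0.01, max_iter=200):
    # Fit a single temperature T > 0 by minimizing the negative log-likelihood
    # of temperature-scaled logits on a held-out validation set.
    log_t = torch.zeros(1, requires_grad=True)            # T = exp(log_t) stays positive
    optimizer = torch.optim.LBFGS([log_t], lr=lr, max_iter=max_iter)

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

def saliency_map(model, image, temperature=1.0):
    # Plain gradient saliency of the top-class probability, computed on the
    # temperature-scaled (i.e. calibrated) softmax output.
    model.eval()
    image = image.detach().clone().requires_grad_(True)   # image: C x H x W
    probs = F.softmax(model(image.unsqueeze(0)) / temperature, dim=1)
    probs.max().backward()
    return image.grad.abs().amax(dim=0)                   # H x W saliency map

# Illustrative usage: calibrate on a validation set, then explain a test image.
# T = fit_temperature(val_logits, val_labels)
# saliency = saliency_map(model, test_image, temperature=T)

In this sketch, changing the temperature changes the softmax probabilities and, through them, the gradients the saliency map is built from, which is the mechanism by which calibration can alter the interpretation a score-based method produces.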

Citation (APA)

Scafarto, G., Posocco, N., & Bonnefoy, A. (2023). Calibrate to Interpret. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13713 LNAI, pp. 340–355). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-26387-3_21
