A Study on Trust in Black Box Models and Post-hoc Explanations


Abstract

Machine learning algorithms that construct complex prediction models are increasingly used for decision-making because of their high accuracy, e.g., to decide whether a bank customer should receive a loan. Owing to this complexity, the models are perceived as black boxes. One approach to mitigating this is to augment the models with post-hoc explanations. In this work, we evaluate three different explanation approaches in a within-subject design study, measuring the users' initial trust, their trust in the provided explanation, and the trust established in the black box model.

Citation (APA)

El Bekri, N., Kling, J., & Huber, M. F. (2020). A Study on Trust in Black Box Models and Post-hoc Explanations. In Advances in Intelligent Systems and Computing (Vol. 950, pp. 35–46). Springer Verlag. https://doi.org/10.1007/978-3-030-20055-8_4
