Explaining any black box model using real data


Abstract

In recent years, the use of complex machine learning models has increased drastically. These complex black box models trade interpretability for accuracy. The lack of interpretability is troubling for, e.g., socially sensitive, safety-critical, or knowledge extraction applications. In this paper, we propose a new explanation method, SLISE, for interpreting predictions from black box models. SLISE can be used with any black box model (model-agnostic), does not require any modifications to the black box model (post-hoc), and explains individual predictions (local). We evaluate our method using real-world datasets and compare it against other model-agnostic, local explanation methods. Our approach solves shortcomings in other related explanation methods by using only existing data instead of sampling new, artificial data. The method also generates more generalizable explanations and is usable without modification across various data domains.
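To make the core idea concrete, here is a minimal sketch of a model-agnostic, post-hoc, local explanation built from existing data only: a linear surrogate is fitted to the black box's outputs on the nearest *real* data points around the instance being explained, rather than on artificially sampled perturbations. This is an illustrative simplification, not the authors' SLISE algorithm (which selects an optimal subset and uses sparse robust regression); all names below are hypothetical.

```python
import numpy as np

def local_linear_explanation(predict, X, x, k=50):
    """Explain predict(x) with a linear surrogate fitted on the k
    nearest real data points from X (no artificial sampling).
    Returns (intercept, coefficients)."""
    # Distances from the instance x to every real data point.
    d = np.linalg.norm(X - x, axis=1)
    neighbours = X[np.argsort(d)[:k]]
    y = predict(neighbours)                      # black-box outputs
    # Ordinary least squares with an intercept column.
    A = np.column_stack([np.ones(len(neighbours)), neighbours])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[0], beta[1:]

# Toy black box: a nonlinear function of three features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
black_box = lambda Z: np.tanh(Z @ np.array([2.0, -1.0, 0.5]))

intercept, coefs = local_linear_explanation(black_box, X, X[0])
# coefs now approximates the black box's local behaviour around X[0];
# its signs indicate how each feature pushes the prediction.
```

Because the surrogate is fitted on real observations, the explanation cannot be distorted by off-manifold artificial samples, which is the shortcoming of sampling-based methods that the abstract refers to.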

Citation (APA)

Björklund, A., Henelius, A., Oikarinen, E., Kallonen, K., & Puolamäki, K. (2023). Explaining any black box model using real data. Frontiers in Computer Science, 5. https://doi.org/10.3389/fcomp.2023.1143904
