Explaining Black-Box Classifiers with ILP – Empowering LIME with Aleph to Approximate Non-linear Decisions with Relational Rules


Abstract

We propose an adaptation of the explanation-generating system LIME. While LIME relies on sparse linear models, we explore how richer explanations can be generated. As application domain we use images consisting of coarse representations of ancient graves. The graves are divided into two classes and can be characterised by meaningful features and relations. This domain was constructed in analogy to a classic concept-acquisition domain studied in psychology. Like LIME, our approach draws samples around a simplified representation of the instance to be explained. The samples are labelled by a generator that simulates a black-box classifier trained on the images. In contrast to LIME, we feed this information to the ILP system Aleph, whose evaluation function we customised to take the similarity of instances into account. We applied our approach to generate explanations for several variants of the ancient-graves domain and show that the resulting explanations can involve richer knowledge, going beyond the expressiveness of sparse linear models.
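The sampling step described above follows the LIME recipe: perturb a simplified (binary interpretable) representation of the instance, label each neighbour with the black-box classifier, and weight it by its similarity to the original instance before handing the examples to Aleph. The sketch below illustrates only that sampling-and-weighting step in Python; the function names, the flip-probability parameter, and the Hamming-distance kernel are illustrative assumptions, not the paper's actual implementation (which passes the weighted examples to the Prolog-based Aleph system).

```python
import math
import random

def perturb(instance, flip_prob=0.3, rng=random):
    """Toggle binary interpretable features at random to create a neighbour.
    (flip_prob is a hypothetical parameter for this sketch.)"""
    return [f if rng.random() > flip_prob else 1 - f for f in instance]

def similarity(a, b):
    """Locality kernel on Hamming distance; LIME uses an exponential kernel,
    the paper folds a similar notion into Aleph's evaluation function."""
    dist = sum(x != y for x, y in zip(a, b))
    return math.exp(-dist)

def sample_neighbourhood(instance, black_box, n_samples=100, seed=0):
    """Draw labelled, similarity-weighted samples around `instance`.
    Returns (neighbour, black-box label, weight) triples."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        z = perturb(instance, rng=rng)
        samples.append((z, black_box(z), similarity(instance, z)))
    return samples

# Toy stand-in for the trained black-box classifier (purely illustrative):
toy_black_box = lambda z: int(z[0] == 1 and z[2] == 1)
samples = sample_neighbourhood([1, 0, 1, 0], toy_black_box, n_samples=50)
```

In the actual system these weighted, labelled neighbours become positive and negative examples for Aleph, which induces relational rules instead of fitting a sparse linear model.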

Citation (APA)

Rabold, J., Siebers, M., & Schmid, U. (2018). Explaining Black-Box Classifiers with ILP – Empowering LIME with Aleph to Approximate Non-linear Decisions with Relational Rules. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11105 LNAI, pp. 105–117). Springer Verlag. https://doi.org/10.1007/978-3-319-99960-9_7
