Interpretability in Intelligent Systems – A New Concept?

Abstract

The very active community for interpretable machine learning can learn from the rich 50+ year history of explainable AI. Here we give two specific examples from this legacy that could enrich current interpretability work: first, explanation desiderata, where we point to the rich set of ideas developed in the field of 'explainable expert systems'; and second, tools for quantifying the uncertainty of high-dimensional feature importance maps, which have been developed in the field of computational neuroimaging.
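
As a rough illustration of the second point, the sketch below estimates the variability of a feature importance map by bootstrap resampling. This is not the method from the paper or from the neuroimaging literature it cites; the toy linear model, the helper fit_and_explain, and the choice of 500 resamples are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a linear "model" on 100-dimensional inputs; its fitted
# coefficients play the role of a feature importance map.
n_samples, n_features = 200, 100
true_w = np.zeros(n_features)
true_w[:10] = 1.0                      # only the first 10 features matter
X = rng.normal(size=(n_samples, n_features))
y = X @ true_w + 0.5 * rng.normal(size=n_samples)

def fit_and_explain(X, y):
    """Least-squares fit; the coefficient vector serves as the importance map."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Bootstrap resampling: refit on resampled data and collect importance maps.
n_boot = 500
maps = np.empty((n_boot, n_features))
for b in range(n_boot):
    idx = rng.integers(0, n_samples, size=n_samples)
    maps[b] = fit_and_explain(X[idx], y[idx])

# Per-feature mean and standard deviation quantify the map's uncertainty.
mean_map = maps.mean(axis=0)
std_map = maps.std(axis=0, ddof=1)

# Features whose importance is large relative to its bootstrap spread
# are treated as reliably important.
z = mean_map / std_map
reliable = np.abs(z) > 2.0
print("features flagged as reliably important:", np.flatnonzero(reliable))
```

With this setup the flagged features concentrate on the first ten dimensions; the same resampling idea carries over to saliency or activation maps, where the per-voxel (or per-pixel) spread indicates which parts of an importance map can be trusted.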

Citation (APA)

Hansen, L. K., & Rieger, L. (2019). Interpretability in Intelligent Systems – A New Concept? In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11700 LNCS, pp. 41–49). Springer Verlag. https://doi.org/10.1007/978-3-030-28954-6_3
