The very active interpretable machine learning community can learn from the rich 50+ year history of explainable AI. Here we give two specific examples from this legacy that could enrich current interpretability work: first, explanation desiderata, where we point to the rich set of ideas developed in the 'explainable expert systems' field, and second, tools for quantifying the uncertainty of high-dimensional feature importance maps, which have been developed in the field of computational neuroimaging.
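To make the second example concrete, the following is a minimal sketch of resampling-based uncertainty estimation for feature importance maps, in the spirit of the neuroimaging tools the abstract alludes to. The synthetic dataset, linear model, use of coefficient vectors as importance maps, and the 2-standard-deviation threshold are all illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic high-dimensional data standing in for, e.g., a neuroimaging study.
X, y = make_classification(n_samples=200, n_features=100,
                           n_informative=10, random_state=0)

n_resamples = 50
n = len(y)
maps = []

for _ in range(n_resamples):
    # Bootstrap resample: refit the model and record its importance map
    # (here simply the coefficient vector of a linear classifier).
    idx = rng.integers(0, n, size=n)
    model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    maps.append(model.coef_.ravel())

maps = np.array(maps)          # shape: (n_resamples, n_features)
mean_map = maps.mean(axis=0)   # average importance per feature
std_map = maps.std(axis=0)     # resampling variability per feature

# Flag features whose importance is not distinguishable from resampling
# noise (|mean| < 2 * std is an illustrative threshold, not a prescribed one).
unreliable = np.abs(mean_map) < 2 * std_map
print(f"{unreliable.sum()} of {len(mean_map)} features judged unreliable")
```

The point of the sketch is that the per-feature standard deviation across resamples gives an uncertainty estimate for each entry of the importance map, so spuriously large attributions in any single fit can be discounted.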