The Coming of Age of Interpretable and Explainable Machine Learning Models


Abstract

Machine learning-based systems are now part of a wide array of real-world applications, seamlessly embedded in the social realm. In the wake of this realisation, strict legal regulations for these systems are being developed to address some of the risks they may pose. This is the coming of age of the interpretability and explainability problems in machine learning-based data analysis, which can no longer be treated as purely academic research questions. In this tutorial, associated with the ESANN 2021 special session on “Interpretable Models in Machine Learning and Explainable Artificial Intelligence”, we discuss explainable and interpretable machine learning as post-hoc and ante-hoc strategies to address these problems and highlight several related aspects, including their assessment. The contributions accepted for the session are then presented in this context.

Citation (APA)
Lisboa, P. J. G., Saralajew, S., Vellido, A., & Villmann, T. (2021). The Coming of Age of Interpretable and Explainable Machine Learning Models. In ESANN 2021 Proceedings - 29th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (pp. 547–556). i6doc.com publication. https://doi.org/10.14428/esann/2021.ES2021-2
