Towards complementary explanations using deep neural networks

Abstract

Interpretability is a fundamental property for the acceptance of machine learning models in highly regulated areas. Recently, deep neural networks have gained the attention of the scientific community due to their high accuracy on a wide range of classification problems. However, they are still seen as black-box models in which it is hard to understand the reasons for the labels they generate. This paper proposes a deep model with monotonic constraints that generates complementary explanations for its decisions, both in terms of style and depth. Furthermore, an objective framework for the evaluation of the explanations is presented. Our method is tested on two biomedical datasets and shows an improvement over traditional models in the quality of the explanations generated.
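
The abstract refers to a deep model with monotonic constraints. As a rough illustration only, and not the architecture proposed in the paper, the sketch below shows one common way to enforce monotonicity in a small network, assuming PyTorch is available: the first-layer weights attached to the monotone features are kept non-negative via softplus, later weights are also non-negative, and ReLU is non-decreasing, so the output is non-decreasing in those features. Names such as MonotonicNet and monotone_mask are hypothetical.

# Minimal sketch of monotonicity via weight constraints (assumes PyTorch).
# Not the authors' architecture; it only illustrates the general idea of
# constraining a network so its output is non-decreasing in selected inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicNet(nn.Module):
    """Small MLP whose output is non-decreasing in the masked input features."""

    def __init__(self, in_features, hidden, monotone_mask):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(hidden, in_features) * 0.1)
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(torch.randn(1, hidden) * 0.1)
        self.b2 = nn.Parameter(torch.zeros(1))
        # monotone_mask: 1.0 for features with a monotone effect, 0.0 otherwise.
        self.register_buffer("mask", monotone_mask.float())

    def forward(self, x):
        # Keep weights on monotone features non-negative; leave the rest free.
        w1 = self.mask * F.softplus(self.w1) + (1.0 - self.mask) * self.w1
        h = F.relu(F.linear(x, w1, self.b1))
        # All second-layer weights non-negative, so monotonicity is preserved.
        return F.linear(h, F.softplus(self.w2), self.b2)

if __name__ == "__main__":
    # Toy check: increasing the first (monotone) feature never lowers the score.
    torch.manual_seed(0)
    mask = torch.tensor([1.0, 0.0, 0.0])
    net = MonotonicNet(in_features=3, hidden=16, monotone_mask=mask)
    x = torch.randn(5, 3)
    x_bigger = x.clone()
    x_bigger[:, 0] += 1.0
    assert (net(x_bigger) >= net(x)).all()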

Citation (APA)

Silva, W., Fernandes, K., Cardoso, M. J., & Cardoso, J. S. (2018). Towards complementary explanations using deep neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11038 LNCS, pp. 133–140). Springer Verlag. https://doi.org/10.1007/978-3-030-02628-8_15
