Interpretable Medical Image Classification Using Prototype Learning and Privileged Information


Abstract

Interpretability is often an essential requirement in medical imaging. Advanced deep learning methods are needed to meet this demand for explainability alongside high performance. In this work, we investigate whether additional information available during training can be used to create a model that is both understandable and powerful. We propose Proto-Caps, a novel solution that combines the benefits of capsule networks, prototype learning, and privileged information. Evaluation on the LIDC-IDRI dataset shows that it pairs increased interpretability with above-state-of-the-art prediction performance. Compared to the explainable baseline model, our method achieves more than 6% higher accuracy in predicting both malignancy (93.0%) and the mean characteristic features of lung nodules. At the same time, the model provides case-based reasoning with prototype representations that allow visual validation of radiologist-defined attributes.
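Proto-Caps itself is not reproduced here, but the prototype-learning component it builds on can be illustrated with a minimal sketch. The function below scores an input's feature vector against a set of learned prototype vectors using a distance-to-similarity transform common in prototype networks; the names, dimensions, and the exact transform are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def prototype_similarities(features, prototypes, eps=1e-4):
    """Score a feature vector against each prototype (higher = more similar).

    features:   (d,) feature vector of one input case
    prototypes: (k, d) matrix of k learned prototype vectors
    """
    # Squared L2 distance from the feature vector to each prototype.
    dist = np.sum((prototypes - features) ** 2, axis=1)
    # Common prototype-network transform: large when the distance is small,
    # near zero when the distance is large.
    return np.log((dist + 1.0) / (dist + eps))

# Toy demonstration: a feature vector lying close to prototype 2
# should receive its highest similarity score there.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(5, 16))                  # 5 prototypes, 16-dim
features = prototypes[2] + 0.01 * rng.normal(size=16)  # near prototype 2

sims = prototype_similarities(features, prototypes)
print(int(np.argmax(sims)))
```

In a trained model, the prototypes are learned parameters anchored to real training cases, so the highest-scoring prototype can be shown to the user as the visual evidence behind a prediction, which is the case-based reasoning the abstract refers to.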

Citation (APA)

Gallée, L., Beer, M., & Götz, M. (2023). Interpretable Medical Image Classification Using Prototype Learning and Privileged Information. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14221 LNCS, pp. 435–445). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-43895-0_41
