Ensembling to Leverage the Interpretability of Medical Image Analysis Systems


Abstract

Along with the increase in the accuracy of artificial intelligence systems, their complexity has also risen. Despite high accuracy, high-risk decision-making requires explanations of a model's decisions, which often take the form of saliency maps. This work examines the efficacy of ensembling deep convolutional neural networks to improve interpretability, on the premise that ensemble models combine the information of their members. A novel approach is presented for aggregating saliency maps derived from multiple base models, as an alternative way of combining the different perspectives that several competent models offer. The proposed methodology lowers computational cost while allowing saliency maps of various origins to be combined. Following a saliency map evaluation scheme, four tests are performed over three image datasets: two medical image datasets and one generic. The results suggest that interpretability is improved by combining information through the aggregation scheme. The discussion that follows provides insight into the inner workings behind the results, such as the effect of specific combinations of interpretability and ensemble methods, and offers useful suggestions for future work.
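The abstract does not spell out the aggregation operator, but the core idea of fusing saliency maps from several base models can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes the per-model maps have already been resized to a common resolution, min-max normalizes each one, and averages them with optional weights (e.g., proportional to each base model's validation accuracy). The names normalize_map and aggregate_saliency are hypothetical.

```python
import numpy as np

def normalize_map(saliency):
    """Min-max normalize a saliency map to [0, 1] so maps produced
    by different base models are on a comparable scale."""
    s_min, s_max = saliency.min(), saliency.max()
    if s_max - s_min < 1e-12:  # constant map: nothing is highlighted
        return np.zeros_like(saliency)
    return (saliency - s_min) / (s_max - s_min)

def aggregate_saliency(maps, weights=None):
    """Fuse per-model saliency maps (e.g., Grad-CAM outputs on a
    common H x W grid) into one ensemble explanation by taking a
    weighted average of the normalized maps."""
    if weights is None:
        weights = [1.0] * len(maps)
    stacked = np.stack([w * normalize_map(m) for w, m in zip(weights, maps)])
    return normalize_map(stacked.sum(axis=0) / sum(weights))

# Example: maps from three base models over the same input image
maps = [np.random.rand(224, 224) for _ in range(3)]
ensemble_map = aggregate_saliency(maps, weights=[0.5, 0.3, 0.2])
```

Normalizing each map before averaging keeps a single base model with large raw gradient magnitudes from dominating the fused explanation, which is one plausible reason combining maps can improve interpretability over any individual model.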

Citation (APA)

Zafeiriou, A., Kallipolitis, A., & Maglogiannis, I. (2023). Ensembling to Leverage the Interpretability of Medical Image Analysis Systems. IEEE Access, 11, 76437–76447. https://doi.org/10.1109/ACCESS.2023.3291610
