An Explainable Brain Tumor Detection Framework for MRI Analysis

Abstract

Explainability in medical image analysis plays an important role in the accurate diagnosis and treatment of tumors, as it helps medical professionals understand the analysis results produced by deep models. This paper proposes an explainable brain tumor detection framework that performs segmentation, classification, and explanation. A re-parameterization method is applied to our classification network, and the quality of the explainable heatmaps is improved by modifying the network architecture. Our classification model also offers post-hoc explainability. We used the BraTS-2018 dataset for training and validation. Experimental results show that our simplified framework delivers excellent performance at high computational speed. Comparing the segmentation results with the outputs of the explainable neural network helps researchers better understand the behavior of an otherwise black-box method, increases trust in the deep model's output, and supports more accurate judgments in disease identification and diagnosis.
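The abstract names two techniques without giving implementation detail: structural re-parameterization of the classification network and post-hoc explainability. As illustration only, the sketch below assumes a RepVGG-style re-parameterization, where parallel training-time convolution branches are fused into a single equivalent convolution at inference; the class RepBlock and its internals are hypothetical, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepBlock(nn.Module):
    """Training-time block with parallel 3x3 and 1x1 convolutions.
    reparameterize() fuses both branches into one 3x3 conv for inference."""
    def __init__(self, channels):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1, bias=True)
        self.conv1 = nn.Conv2d(channels, channels, 1, bias=True)
        self.fused = None  # populated by reparameterize()

    def forward(self, x):
        if self.fused is not None:
            return F.relu(self.fused(x))
        return F.relu(self.conv3(x) + self.conv1(x))

    @torch.no_grad()
    def reparameterize(self):
        # Zero-pad the 1x1 kernel to 3x3 and add it to the 3x3 kernel;
        # biases add directly. Because convolution is linear in its kernel,
        # the fused conv is mathematically equivalent to the branch sum.
        k = self.conv3.weight + F.pad(self.conv1.weight, [1, 1, 1, 1])
        b = self.conv3.bias + self.conv1.bias
        self.fused = nn.Conv2d(k.size(1), k.size(0), 3, padding=1, bias=True)
        self.fused.weight.copy_(k)
        self.fused.bias.copy_(b)
```

Fusion is exact for these stride-1, padding-1 branches: calling `blk.reparameterize()` and re-running `blk(x)` reproduces the two-branch output to floating-point tolerance, with only one convolution executed at inference.

A second sketch, again an assumption rather than the paper's method, shows a Grad-CAM-style post-hoc heatmap of the kind such frameworks typically visualize; the helper grad_cam and its signature are hypothetical.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, layer, image, class_idx):
    """Compute a Grad-CAM heatmap for class_idx from the feature maps of
    `layer`. Returns a map in [0, 1] at the feature-map resolution."""
    feats, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    try:
        logits = model(image)            # image: (1, C, H, W)
        model.zero_grad()
        logits[0, class_idx].backward()  # gradients w.r.t. the target class
        # Channel weights = global-average-pooled gradients (Grad-CAM).
        w = grads["g"].mean(dim=(2, 3), keepdim=True)
        cam = F.relu((w * feats["a"]).sum(dim=1))  # (1, h, w)
        return (cam / (cam.max() + 1e-8)).squeeze(0)
    finally:
        h1.remove()
        h2.remove()
```

For a ResNet-like classifier one would typically pass the last convolutional stage as `layer` (e.g. `grad_cam(net, net.layer4, img, cls)`), then upsample the returned map to the MRI slice size for overlay.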

Citation (APA)

Yan, F., Chen, Y., Xia, Y., Wang, Z., & Xiao, R. (2023). An Explainable Brain Tumor Detection Framework for MRI Analysis. Applied Sciences (Switzerland), 13(6). https://doi.org/10.3390/app13063438
