Interpretable Deep Learning for Neuroimaging-Based Diagnostic Classification

Abstract

Deep neural networks (DNNs) are increasingly used in neuroimaging research for the diagnosis of brain disorders and for understanding the human brain. Despite their impressive performance, their use in medical applications will remain limited unless there is more transparency about how these algorithms arrive at their decisions. We address this issue in the current report. A DNN classifier was trained to discriminate between healthy subjects and those with posttraumatic stress disorder (PTSD) using brain connectivity obtained from functional magnetic resonance imaging data. The classifier achieved 90% accuracy. Brain connectivity features important for classification were generated for a pool of test subjects, and permutation testing was used to identify significantly discriminative connections. Heatmaps of significant paths were generated from 10 different interpretability algorithms based on variants of layer-wise relevance and gradient attribution methods. Because different interpretability algorithms make different assumptions about the data and the model, their explanations showed both commonalities and differences. We therefore developed a consensus across interpretability methods, which aligned well with existing knowledge about the brain alterations underlying PTSD. More than 20 regions recognized as relevant to PTSD in prior studies were confidently identified, with a voting score exceeding 8 and a family-wise corrected threshold below 0.05. Our work illustrates how robustness and physiological plausibility of explanations can be achieved when interpreting classifications obtained from DNNs in diagnostic neuroimaging applications by evaluating convergence across methods. This will be crucial for trust in AI-based medical diagnostics in the future.
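To make the consensus procedure concrete, the sketch below shows one possible way to combine attributions from several gradient-based interpretability methods with a permutation test and a voting threshold. It is an illustration under stated assumptions, not the authors' pipeline: the toy model, the random connectivity features, the sign-flip permutation test, the Bonferroni family-wise correction, and the vote threshold of 3 (the paper uses 10 methods and a voting score above 8) are all placeholders, and the attribution methods are a few available in the Captum library rather than the exact ten variants used in the study.

# Minimal sketch (not the authors' exact pipeline): consensus voting across
# attribution methods, using a sign-flip permutation test per connection and
# Bonferroni (family-wise) correction. Model, data, and thresholds are illustrative.
import numpy as np
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, Saliency, InputXGradient, DeepLift

rng = np.random.default_rng(0)
torch.manual_seed(0)

n_subjects, n_connections = 40, 100          # pool of test subjects x connectivity features
X = torch.randn(n_subjects, n_connections)   # stand-in for vectorized functional connectivity

# Toy two-class classifier standing in for the trained DNN.
model = nn.Sequential(nn.Linear(n_connections, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# A few attribution methods (the paper uses ten LRP/gradient variants; four shown here).
methods = {
    "integrated_gradients": IntegratedGradients(model),
    "saliency": Saliency(model),
    "input_x_gradient": InputXGradient(model),
    "deeplift": DeepLift(model),
}

def signflip_perm_pvals(attr, n_perm=5000):
    # One-sample sign-flip permutation test per connection on subject-level attributions.
    obs = attr.mean(axis=0)                                  # observed mean attribution
    flips = rng.choice([-1.0, 1.0], size=(n_perm, attr.shape[0]))
    null = flips @ attr / attr.shape[0]                      # (n_perm, n_connections) null means
    return (np.abs(null) >= np.abs(obs)).mean(axis=0)        # two-sided p-values

votes = np.zeros(n_connections, dtype=int)
alpha = 0.05
for name, method in methods.items():
    attr = method.attribute(X, target=1).detach().numpy()    # attributions for the PTSD class
    pvals = signflip_perm_pvals(attr)
    significant = pvals < alpha / n_connections              # Bonferroni family-wise correction
    votes += significant.astype(int)

vote_threshold = 3                                           # placeholder; paper: score > 8 of 10
consensus = np.where(votes >= vote_threshold)[0]
print("Consensus connections:", consensus)

In this sketch each connection is tested separately within each method, the family-wise correction is applied per method, and a vote is tallied for every connection that survives; the consensus set is then the connections that remain significant in at least the chosen number of methods.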

Cite (APA)

Deshpande, G., Masood, J., Huynh, N., Denney, T. S., & Dretsch, M. N. (2024). Interpretable Deep Learning for Neuroimaging-Based Diagnostic Classification. IEEE Access, 12, 55474–55490. https://doi.org/10.1109/ACCESS.2024.3388911
