Quantified Explainability: Convolutional Neural Network Focus Assessment in Arrhythmia Detection


Abstract

In clinical practice, every decision should be reliable and explainable to stakeholders. The high accuracy of deep learning (DL) models offers a great advantage, but the fact that they function as black boxes hinders their clinical application. Hence, explainability methods have become important, as they provide explanations of DL model decisions. In this study, two datasets of electrocardiogram (ECG) image representations of six heartbeats were built, one labeled with the last heartbeat and the other with the first heartbeat. Each dataset was used to train one neural network. Finally, we applied well-known explainability methods to the resulting networks to explain their classifications. Explainability methods produce attribution maps in which pixel intensities are proportional to their importance for the classification task. We then developed a metric to quantify how much each model focuses on the heartbeat of interest. The classification models achieved testing accuracies of 93.66% and 91.72%. The models focused around the heartbeat of interest, with focus-metric values ranging between 8.8% and 32.4%. Future work will investigate the importance of regions outside the region of interest, as well as the contribution of specific ECG waves to the classification.

Citation (APA)
Varandas, R., Gonçalves, B., Gamboa, H., & Vieira, P. (2022). Quantified Explainability: Convolutional Neural Network Focus Assessment in Arrhythmia Detection. BioMedInformatics, 2(1), 124–138. https://doi.org/10.3390/biomedinformatics2010008
