The cost of explainability in artificial intelligence-enhanced electrocardiogram models



Abstract

Artificial intelligence-enhanced electrocardiogram (AI-ECG) models have shown outstanding performance in diagnostic and prognostic tasks, yet their black-box nature hampers clinical adoption. Meanwhile, a growing demand for explainable AI in medicine underscores the need for transparent, trustworthy decision-making. Moving beyond post-hoc explainability techniques that have shown unreliable results, we focus on explicit representation learning using variational autoencoders (VAE) to capture inherently interpretable ECG features. While VAEs have demonstrated potential for ECG interpretability, the presumed performance-explainability trade-off remains underexplored, with many studies relying on complex, non-linear methods that obscure the morphological information of the features. In this work, we present a novel framework (VAE-SCAN) to model bi-directional, interpretable associations between ECG features and clinical factors. We also investigate how different representations affect ECG decoding performance across models with varying levels of explainability. Our findings demonstrate the cost introduced by intrinsic ECG interpretability, based on which we discuss potential implications and directions.
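The abstract's core idea — learning a low-dimensional latent representation of an ECG with a variational autoencoder, so that individual latent features can be inspected and related to clinical factors — can be illustrated with a minimal sketch. This is NOT the paper's VAE-SCAN architecture: the linear encoder/decoder, input length, and latent size below are illustrative assumptions, and no training is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "ECG" input: a single beat of 256 samples (assumed length).
x = rng.standard_normal(256)

latent_dim = 8  # assumed small latent space for interpretable features

# Randomly initialised linear encoder/decoder weights (untrained sketch).
W_enc_mu = rng.standard_normal((latent_dim, 256)) * 0.01
W_enc_logvar = rng.standard_normal((latent_dim, 256)) * 0.01
W_dec = rng.standard_normal((256, latent_dim)) * 0.01

def encode(x):
    """Map a beat to the mean and log-variance of the posterior q(z|x)."""
    return W_enc_mu @ x, W_enc_logvar @ x

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Reconstruct the beat from the latent features."""
    return W_dec @ z

mu, logvar = encode(x)
z = reparameterize(mu, logvar)
x_hat = decode(z)

# Standard VAE objective: reconstruction error plus KL(q(z|x) || N(0, I)).
recon = np.mean((x - x_hat) ** 2)
kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
loss = recon + kl

print(z.shape, x_hat.shape)
```

After training, each of the latent dimensions in `z` would be a candidate interpretable feature; the decoder makes the representation explicit, since perturbing one latent coordinate and decoding shows the corresponding change in ECG morphology.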

Citation (APA)

Patlatzoglou, K., Pastika, L., Barker, J., Sieliwonczyk, E., Khattak, G. R., Zeidaabadi, B., … Ng, F. S. (2025). The cost of explainability in artificial intelligence-enhanced electrocardiogram models. npj Digital Medicine, 8(1). https://doi.org/10.1038/s41746-025-02122-y
