Explainable Artificial Intelligence and Cardiac Imaging: Toward More Interpretable Models


Abstract

Artificial intelligence applications have shown success across medical and health care domains, and cardiac imaging is no exception. However, some machine learning models, especially deep learning models, are considered "black boxes" because they provide no explanation or rationale for their outcomes. The complexity and opacity of these models necessitate a transition to explainable artificial intelligence (XAI) methods to ensure that model results are both transparent and understandable to end users. In cardiac imaging, only a limited number of studies have applied XAI methodologies. This article provides a comprehensive literature review of state-of-the-art work using XAI methods for cardiac imaging. Moreover, it offers simple and comprehensive guidelines on XAI. Finally, open issues and future directions for XAI in cardiac imaging are discussed.
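To make the idea of XAI for imaging models concrete, the sketch below implements occlusion sensitivity, one common model-agnostic explanation method: each image patch is masked in turn and the drop in the model's output score is recorded, producing a heatmap of the regions the model relies on. This is an illustrative example, not a method from the article; the `model` callable and its toy region-of-interest scorer are hypothetical stand-ins for a trained cardiac-image classifier.

```python
import numpy as np

def occlusion_sensitivity(model, image, patch=8, baseline=0.0):
    """Model-agnostic occlusion map: re-run the model with each
    patch masked out and record the drop in the predicted score.
    `model` is any callable mapping a 2-D image to a scalar score."""
    h, w = image.shape
    base_score = model(image)
    heat = np.zeros_like(image, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            # Large drop -> this patch mattered to the prediction.
            heat[i:i + patch, j:j + patch] = base_score - model(occluded)
    return heat

# Toy "model": scores the mean intensity of a fixed region of
# interest, standing in for a trained classifier (hypothetical).
roi = (slice(8, 16), slice(8, 16))
model = lambda img: img[roi].mean()

img = np.zeros((24, 24))
img[roi] = 1.0
heat = occlusion_sensitivity(model, img, patch=8)
```

In practice the heatmap is overlaid on the input image (e.g., a cardiac MR slice) so clinicians can verify that the model attends to anatomically plausible regions rather than artifacts.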

Citation (APA)

Salih, A., Boscolo Galazzo, I., Gkontra, P., Lee, A. M., Lekadir, K., Raisi-Estabragh, Z., & Petersen, S. E. (2023). Explainable Artificial Intelligence and Cardiac Imaging: Toward More Interpretable Models. Circulation: Cardiovascular Imaging, 16(4), E014519. https://doi.org/10.1161/CIRCIMAGING.122.014519
