Explainable Feature Extraction and Prediction Framework for 3D Image Recognition Applied to Pneumonia Detection


Abstract

Explainable machine learning is an emerging domain that is fundamental for trustworthy real-world applications. A lack of trust and understanding is the main drawback of deep learning models when they are applied to real-world decision systems and prediction tasks. Such models are considered black boxes because they cannot explain the reasons for their predictions in human terms; thus, they cannot be universally trusted. In critical real-world applications, such as medical, legal, and financial ones, an explanation of machine learning (ML) model decisions is crucial, and often mandatory, in order to earn trust and avoid fatal ML bugs, which could endanger human safety, rights, and health. Nevertheless, explainable models are more often than not less accurate; thus, it is essential to develop new methodologies for creating interpretable predictors that are almost as accurate as black-box ones. In this work, we propose a novel explainable feature extraction and prediction framework applied to 3D image recognition. In particular, we propose a new set of explainable features based on mathematical and geometric concepts, such as lines, vertices, contours, and the area of objects. These features are calculated from the contours extracted from every slice of a 3D input image. To validate the efficiency of the proposed approach, we apply it to a critical real-world application: pneumonia detection based on 3D CT images. In our experiments, the proposed white-box prediction framework achieves performance similar to or marginally better than state-of-the-art black-box 3D-CNN models. Given that the proposed approach is explainable, such performance is particularly significant.
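The abstract describes computing interpretable geometric descriptors per slice of a 3D volume and using them for prediction. The sketch below, a minimal illustration and not the authors' actual feature set, computes a few such descriptors (pixel area, a crude perimeter, bounding-box extent) for each 2D slice and averages them over the volume; the `threshold` parameter and the specific feature names are assumptions for illustration.

```python
import numpy as np

def slice_geometric_features(slice_2d, threshold=0.5):
    """Simple, human-interpretable geometric descriptors of one slice.

    These are illustrative stand-ins for the contour-based features
    (area, contour length, shape extent) described in the abstract.
    """
    mask = slice_2d > threshold                 # foreground-object mask
    area = int(mask.sum())                      # object area in pixels
    # Perimeter estimate: foreground pixels with at least one
    # background 4-neighbour are treated as boundary pixels.
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())
    # Extent: fraction of the bounding box filled by the object.
    if area == 0:
        extent = 0.0
    else:
        ys, xs = np.nonzero(mask)
        box = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
        extent = area / box
    return {"area": area, "perimeter": perimeter, "extent": extent}

def volume_features(volume_3d, threshold=0.5):
    """Aggregate per-slice descriptors over a 3D volume (mean per feature)."""
    feats = [slice_geometric_features(s, threshold) for s in volume_3d]
    return {k: float(np.mean([f[k] for f in feats])) for k in feats[0]}
```

Such a feature vector can then be fed to any inherently interpretable predictor (e.g., a linear model or a shallow decision tree), which is what makes the overall pipeline a white box.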

Citation (APA)

Pintelas, E., Livieris, I. E., & Pintelas, P. (2023). Explainable Feature Extraction and Prediction Framework for 3D Image Recognition Applied to Pneumonia Detection. Electronics (Switzerland), 12(12). https://doi.org/10.3390/electronics12122663

