Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers

Citations: 33
Readers: 78 (Mendeley users who have this article in their library)

Abstract

Problem: An application of explainable artificial intelligence (XAI) methods to COVID CT-scan classifiers is presented. Motivation: Classifiers may be exploiting spurious artifacts in dataset images to achieve high performance, and explainable techniques can help identify this issue. Aim: Several approaches were used in tandem to create a complete overview of the classifications. Methodology: The techniques used included Grad-CAM, LIME, RISE, Squaregrid, and direct gradient approaches (Vanilla, Smooth, Integrated). Main results: Among the deep neural network architectures evaluated for this image classification task, VGG16 was shown to be the most affected by biases towards spurious artifacts, while DenseNet was notably more robust against them. Further impacts: Results also show that small differences in validation accuracy can cause drastic changes in the explanation heatmaps of DenseNet architectures, meaning that small gains or losses in validation accuracy may correspond to large changes in the biases the networks learn. Notably, the strong performance metrics achieved by all these networks (accuracy, F1 score, and AUC all in the 80–90% range) could give users the erroneous impression that there is no bias; the analysis of the explanation heatmaps, however, brings the bias to light.
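Of the techniques above, Grad-CAM illustrates how such explanation heatmaps are produced: the gradient of a class score with respect to the last convolutional feature maps is average-pooled into per-channel weights, and the weighted sum of the maps marks the image regions driving the prediction. The following is a minimal TensorFlow/Keras sketch, not the authors' exact pipeline: the ImageNet-pretrained VGG16, its layer name "block5_conv3", and the random array standing in for a preprocessed CT slice are all illustrative assumptions.

import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16

def grad_cam(model, image, last_conv_layer_name):
    """Return an (H, W) heatmap in [0, 1] for the model's top predicted class."""
    # Expose both the last convolutional feature maps and the final scores.
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_idx = int(tf.argmax(preds[0]))
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, conv_out)   # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # pooled gradients -> channel weights
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)[0]
    cam = tf.nn.relu(cam)                          # keep only positive evidence
    cam = cam / (tf.reduce_max(cam) + 1e-8)        # normalize to [0, 1]
    # Upsample the coarse map back to the input resolution.
    return tf.image.resize(cam[..., None], image.shape[:2]).numpy()[..., 0]

# Illustrative usage; a real CT slice would be preprocessed for VGG16 first.
model = VGG16(weights="imagenet")
ct_slice = np.random.rand(224, 224, 3).astype("float32")  # stand-in input
heatmap = grad_cam(model, ct_slice, "block5_conv3")       # shape (224, 224)

Overlaying such a heatmap on the CT slice makes bias directly visible: if the high-activation regions fall outside the lung fields, for instance on embedded text markers or image borders, the classifier is likely relying on spurious artifacts of the kind this paper detects.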

References

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Huang, G., Liu, Z., van der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).


Citation (APA)

de Sousa, I. P., Vellasco, M. M. B. R., & da Silva, E. C. (2021). Explainable artificial intelligence for bias detection in COVID CT-scan classifiers. Sensors, 21(16). https://doi.org/10.3390/s21165657

Readers over time

[Chart: Mendeley reader counts per year, 2021–2025; y-axis 0–32.]

Readers' Seniority

PhD / Post grad / Masters / Doc: 17 (50%)
Lecturer / Post doc: 9 (26%)
Professor / Associate Prof.: 4 (12%)
Researcher: 4 (12%)

Readers' Discipline

Computer Science: 17 (59%)
Engineering: 6 (21%)
Nursing and Health Professions: 3 (10%)
Medicine and Dentistry: 3 (10%)

Article Metrics

Blog Mentions: 1
