The Impact of Simultaneous Adversarial Attacks on Robustness of Medical Image Analysis

This article is free to access.
Abstract

Deep learning models are widely used in healthcare systems. However, these models are themselves vulnerable to adversarial attacks. Moreover, the black-box nature of deep learning models makes such attacks difficult to detect. Because of the sensitivity of medical data, adversarial attacks on healthcare systems are considered serious security and privacy threats. In this paper, we provide a comprehensive analysis of adversarial attacks on medical image analysis, using two adversarial methods, FGSM and PGD, applied to either the entire image or a partial region of it. The partial attacks come in various sizes and are applied either individually or in combination. We use three medical datasets to examine the attacks' impact on model accuracy and robustness. Finally, we provide a complete implementation of the attacks and discuss the results. Our results reveal the weaknesses and strengths of four deep learning models and show how varying perturbations influence model behaviour with respect to specific regions and critical features.
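For context, FGSM perturbs an input once along the sign of the loss gradient, while PGD iterates smaller such steps and projects the result back into an epsilon-ball around the original image. The sketch below is a minimal PyTorch illustration of both attacks, not the paper's own implementation: it assumes a generic classifier `model`, cross-entropy loss, and inputs normalized to [0, 1], and adds an optional binary `mask` to mimic the partial-image attacks described in the abstract.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps, mask=None):
    """One-step FGSM: x_adv = x + eps * sign(grad_x loss).
    `mask` (same shape as x, values 0/1) restricts the perturbation
    to a partial image region; None attacks the whole image."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    step = eps * x.grad.sign()
    if mask is not None:
        step = step * mask
    return (x + step).detach().clamp(0, 1)

def pgd_attack(model, x, y, eps, alpha, steps, mask=None):
    """Iterative PGD: repeat FGSM-style steps of size alpha,
    projecting back into the L-infinity ball of radius eps."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        step = alpha * grad.sign()
        if mask is not None:
            step = step * mask
        x_adv = x_adv.detach() + step
        # Project back into the eps-ball around the original image,
        # then clip to the valid pixel range.
        x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```

A combined partial attack, as studied in the paper, would correspond to a mask covering multiple regions at once; the exact region sizes and model details are those of the paper, not this sketch.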

Citation (APA)

Pal, S., Rahman, S., Beheshti, M., Habib, A., Jadidi, Z., & Karmakar, C. (2024). The Impact of Simultaneous Adversarial Attacks on Robustness of Medical Image Analysis. IEEE Access, 12, 66478–66494. https://doi.org/10.1109/ACCESS.2024.3396566
