How Effective is Adversarial Training of CNNs in Medical Image Analysis?

Abstract

Adversarial attacks are carefully crafted inputs that can deceive machine learning models into producing wrong results with seemingly high confidence. One defence commonly used in the image analysis literature is to introduce adversarial images at training time, i.e. adversarial training. However, the effectiveness of adversarial training remains unclear in the healthcare domain, where complex medical scans underpin a wide range of clinical workflows. In this paper, we carried out an empirical investigation into the effectiveness of adversarial training as a defence technique in the context of medical images. We demonstrated that adversarial training is, in principle, a transferable defence on medical imaging data, and that it can potentially defend against attacks previously unseen by the model. We also showed empirically that the strength of the attack, determined by the parameter ϵ, and the percentage of adversarial images included during training have a key influence on the success of the defence. Our analysis used 58,954 images from the publicly available MedNIST benchmarking dataset.
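The abstract does not specify which attack was used, but a strength parameter ϵ is characteristic of gradient-sign perturbations such as FGSM. As a minimal sketch of the idea (not the authors' implementation), the snippet below shows FGSM-based adversarial training in PyTorch: `epsilon` controls the attack strength and `adv_fraction` the percentage of adversarial images per batch, mirroring the two factors the paper studies. The function names and defaults here are hypothetical.

```python
import torch
import torch.nn.functional as F


def fgsm_attack(model, images, labels, epsilon):
    """Craft FGSM adversarial examples: x_adv = x + epsilon * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    # Gradient w.r.t. the inputs only; parameter grads are left untouched.
    grad, = torch.autograd.grad(loss, images)
    return (images + epsilon * grad.sign()).clamp(0.0, 1.0).detach()


def adversarial_training_step(model, optimizer, images, labels,
                              epsilon=0.1, adv_fraction=0.5):
    """One optimisation step on a batch mixing clean and adversarial images."""
    n_adv = int(adv_fraction * images.size(0))
    if n_adv > 0:
        adv = fgsm_attack(model, images[:n_adv], labels[:n_adv], epsilon)
        images = torch.cat([adv, images[n_adv:]], dim=0)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Sweeping `epsilon` and `adv_fraction`, then evaluating against attacks not seen during training, would reproduce the kind of sensitivity analysis the abstract describes.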

Citation (APA)

Xie, Y., & Fetit, A. E. (2022). How Effective is Adversarial Training of CNNs in Medical Image Analysis? In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13413 LNCS, pp. 443–457). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-12053-4_33
