Convolutional neural networks (CNNs) have gained popularity in Internet-of-Healthcare (IoH) applications such as medical diagnostics. However, recent research shows that adversarial attacks, which apply slight, imperceptible perturbations to inputs, can undermine deep neural network techniques in healthcare. This raises questions about the safety of deploying IoH devices in clinical settings. In this paper, we first review techniques for defending against such cyber-attacks. We then study the robustness of several well-known CNN architectures from the sequential, parallel, and residual families, namely LeNet5, MobileNetV1, VGG16, ResNet50, and InceptionV3, against the fast gradient sign method (FGSM) and projected gradient descent (PGD) attacks, in the context of chest radiograph (X-ray) classification for an IoH application. Finally, we investigate improving the security of these CNN structures through standard and adversarial training. The results show that, among these models, smaller models with lower computational complexity are more robust to adversarial attacks than the larger models frequently used in IoH applications. Moreover, we show that adversarially trained networks can outperform their standard-trained counterparts. The experimental results indicate that the model performance breakpoint lies at γ = 0.3, with a maximum tolerated accuracy loss of 2%.
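As a concrete illustration of the two attacks named above and of the adversarial-training defense, the following PyTorch sketch shows a typical implementation. This is an assumption for illustration, not the authors' released code: the function names, step size alpha, iteration count, and the perturbation budget eps (playing the role of the paper's γ) are placeholder values.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """Single-step FGSM: move each pixel by eps along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range

def pgd_attack(model, x, y, eps, alpha=0.01, steps=10):
    """PGD: iterated FGSM, projected back into the L-inf eps-ball after each step."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball around x
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.3):
    """One adversarial-training update: train on PGD-perturbed inputs."""
    model.train()
    x_adv = pgd_attack(model, x, y, eps)  # craft adversarial examples for this batch
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this scheme, the attack strength is controlled entirely by the perturbation budget, which matches the role the abstract assigns to γ: accuracy degrades as the budget grows, with γ = 0.3 reported as the breakpoint.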
Khriji, L., Bouaafia, S., Messaoud, S., Ammari, A. C., & Machhout, M. (2023). Secure Convolutional Neural Network-Based Internet-of-Healthcare Applications. IEEE Access, 11, 36787–36804. https://doi.org/10.1109/ACCESS.2023.3266586