Rising mortality rates in recent years have made melanoma one of the world’s most lethal cancers. Dermoscopy images (DIs) are used in smart healthcare applications to extract medically relevant features via deep transfer learning (DTL). Lesions in DIs are widespread, exhibit predominantly local features, and carry diagnostic uncertainty. Our bi-branch parallel model has three components: (1) a Transformer module (TM), (2) a self-attention unit (SAU), and (3) a convolutional neural network (CNN). Because the CNN and the TM extract local and global features, respectively, the model fuses the two feature types through cross-fusion to produce fine-grained representations. The outputs of the parallel branches are merged by a feature-fusion architecture, yielding a representation that captures the characteristics of a variety of lesions. In addition, this paper proposes an optimized, lightweight CNN architecture (optResNet-18) that discriminates skin cancer lesions with high accuracy. To verify the proposed method, we evaluated classification accuracy on the ISIC-2019 and PH2 datasets, obtaining 97.48% and 96.87%, respectively, a significant improvement over traditional CNNs (e.g., ResNet-50 and ResNet-101) and the TM alone. The proposed model also surpasses state-of-the-art methods on performance metrics such as AUC, F1-score, specificity, precision, and recall. By combining DTL and medical imaging, the proposed method can further serve as a generalizable model for diagnosing different lesions in DIs within smart healthcare applications. With the proposed e-Health platform, skin diseases can be detected in real time, which is crucial for fast and reliable diagnostics.
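To make the bi-branch idea concrete, below is a minimal PyTorch sketch of a two-branch classifier: a small convolutional stack standing in for the lightweight CNN branch, a patch-embedding Transformer encoder (self-attention) standing in for the TM, and a simple concatenation-based fusion head. All module names (LocalCNNBranch, GlobalTransformerBranch, BiBranchFusionNet), layer sizes, and the fusion scheme are illustrative assumptions; they are not the authors' exact optResNet-18, SAU, or cross-fusion design.

```python
# Minimal sketch of a bi-branch CNN + Transformer classifier with feature fusion.
# Hyperparameters and fusion-by-concatenation are assumptions, not the paper's exact method.
import torch
import torch.nn as nn


class LocalCNNBranch(nn.Module):
    """Small convolutional stack standing in for the lightweight CNN (local features)."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(128, out_dim)

    def forward(self, x):
        f = self.features(x).flatten(1)      # (B, 128) pooled local descriptor
        return self.proj(f)                  # (B, out_dim)


class GlobalTransformerBranch(nn.Module):
    """Patch embedding + Transformer encoder (self-attention) for global context."""
    def __init__(self, out_dim=256, patch=16, dim=128, depth=4, heads=4, img_size=224):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)   # patchify
        num_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))         # learned positions
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.proj = nn.Linear(dim, out_dim)

    def forward(self, x):
        t = self.embed(x).flatten(2).transpose(1, 2) + self.pos   # (B, N, dim) token sequence
        t = self.encoder(t).mean(dim=1)                           # average over tokens
        return self.proj(t)                                       # (B, out_dim)


class BiBranchFusionNet(nn.Module):
    """Fuses local (CNN) and global (Transformer) features before classification."""
    def __init__(self, num_classes=8, feat_dim=256):
        super().__init__()
        self.local_branch = LocalCNNBranch(feat_dim)
        self.global_branch = GlobalTransformerBranch(feat_dim)
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, num_classes)
        )

    def forward(self, x):
        fused = torch.cat([self.local_branch(x), self.global_branch(x)], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = BiBranchFusionNet(num_classes=8)          # ISIC-2019 defines 8 diagnostic classes
    logits = model(torch.randn(2, 3, 224, 224))       # dummy dermoscopy batch
    print(logits.shape)                               # torch.Size([2, 8])
```

In this sketch the two branches are simply concatenated before the classifier; the paper's cross-fusion of fine-grained features would instead exchange information between branches, but the parallel-branch structure and the local/global split it illustrates are the same.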
Rezaee, K., & Zadeh, H. G. (2024). Self-attention transformer unit-based deep learning framework for skin lesions classification in smart healthcare. Discover Applied Sciences, 6(1). https://doi.org/10.1007/s42452-024-05655-1