Automatic classification of pigmented, non-pigmented, and depigmented non-melanocytic skin lesions has garnered considerable attention in recent years. However, imaging variations in skin texture, lesion shape, depigmentation contrast, lighting conditions, etc., hinder robust feature extraction, degrading classification accuracy. In this paper, we propose a new deep neural network that exploits input data for robust feature extraction. Specifically, we analyze the convolutional network’s behavior (field-of-view) to determine where to apply deep supervision for improved feature extraction. To this end, we first perform activation mapping to generate an object mask that highlights the input regions most critical to the classification output. We then select for deep supervision the network layer whose layer-wise effective receptive field best matches the approximated object shape in the object mask. Using different types of convolutional feature extractors and classifiers on three melanoma detection datasets and two vitiligo detection datasets, we verify the effectiveness of our new method.
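The layer-selection idea in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: it uses the standard closed-form receptive-field recurrence as a stand-in for the paper's measured layer-wise effective receptive field, a toy binary mask in place of a real activation-mapping output, and a made-up layer specification. All names (`receptive_fields`, `object_size`, `pick_layer`) are illustrative.

```python
# Hedged sketch: pick the layer whose (theoretical) receptive field
# best matches the object size estimated from an activation-derived mask,
# as a candidate site for deep supervision.

def receptive_fields(layers):
    """layers: list of (kernel_size, stride) per conv layer.
    Returns the theoretical receptive-field size after each layer,
    via rf_l = rf_{l-1} + (k_l - 1) * jump_{l-1}, jump_l = jump_{l-1} * s_l."""
    rf, jump, sizes = 1, 1, []
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
        sizes.append(rf)
    return sizes

def object_size(mask):
    """Approximate the object's extent as the longer side of the
    bounding box of nonzero entries (mask: 2-D list of 0/1)."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for j in range(len(mask[0])) if any(row[j] for row in mask)]
    return max(rows[-1] - rows[0] + 1, cols[-1] - cols[0] + 1)

def pick_layer(layers, mask):
    """Index of the layer whose receptive field is closest to the
    estimated object size."""
    target = object_size(mask)
    rfs = receptive_fields(layers)
    return min(range(len(rfs)), key=lambda i: abs(rfs[i] - target))

# Toy example: a small stack of 3x3 convs with occasional stride 2,
# and an 8x8 binary "object mask" (e.g., a thresholded activation map).
layers = [(3, 1), (3, 2), (3, 1), (3, 2), (3, 1)]
mask = [[0] * 8 for _ in range(8)]
for i in range(2, 7):
    for j in range(3, 7):
        mask[i][j] = 1

print(pick_layer(layers, mask))  # → 1 (layer with RF 5 matches object size 5)
```

In a real pipeline, the mask would come from a class-activation-mapping technique applied to the trained classifier, and the matching layer would receive an auxiliary supervision loss during training.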
CITATION STYLE
Mishra, S., Zhang, Y., Zhang, L., Zhang, T., Hu, X. S., & Chen, D. Z. (2022). Data-Driven Deep Supervision for Skin Lesion Classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13431 LNCS, pp. 721–731). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-16431-6_68