Abstract
We present a study on the fusion of pixel data and patient metadata (age, gender, and body location) for improving the classification of skin lesion images. The experiments were conducted on the ISIC 2019 skin lesion classification challenge data set. Taking two plain convolutional neural networks (CNNs) as a baseline, metadata are merged using either non-neural machine learning methods (tree-based methods and support vector machines) or shallow neural networks. Results show that shallow neural networks outperform the other approaches on all overall evaluation measures. However, despite the increase in classification accuracy (up to +19.1%), the average per-class sensitivity interestingly decreases in three out of four cases for the CNNs, suggesting that using metadata penalizes prediction accuracy for less-represented classes. A study of the patient metadata shows that age is the most useful metadatum as a decision criterion, followed by body location and gender.
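The fusion strategy summarized above (concatenating CNN image features with encoded patient metadata and feeding them to a shallow classifier) can be sketched as a minimal numpy example. All dimensions, encodings, and weights below are illustrative assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed encodings: 256 CNN features; metadata as age (1 scaled scalar),
# gender (2 one-hot entries), body location (8 one-hot entries).
cnn_features = rng.random(256)      # image features from a pretrained CNN
age = np.array([55 / 100.0])        # patient age, scaled to [0, 1]
gender = np.array([1.0, 0.0])       # one-hot, e.g. [male, female]
location = np.eye(8)[3]             # one-hot body-location vector

# Fusion by concatenation of pixels-derived features and metadata.
fused = np.concatenate([cnn_features, age, gender, location])

# Shallow classifier head: one hidden layer + softmax over the
# 8 ISIC 2019 diagnostic classes (weights here are random placeholders).
W1 = rng.standard_normal((fused.size, 32)) * 0.01
b1 = np.zeros(32)
W2 = rng.standard_normal((32, 8)) * 0.01
b2 = np.zeros(8)

hidden = np.maximum(fused @ W1 + b1, 0.0)   # ReLU activation
logits = hidden @ W2 + b2
probs = np.exp(logits - logits.max())
probs /= probs.sum()                        # softmax class probabilities
```

In practice such a head would be trained jointly or on top of frozen CNN features; the sketch only shows how the two modalities are combined into a single input vector.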
Nunnari, F., Bhuvaneshwara, C., Ezema, A. O., & Sonntag, D. (2020). A Study on the Fusion of Pixels and Patient Metadata in CNN-Based Classification of Skin Lesion Images. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12279 LNCS, pp. 191–208). Springer. https://doi.org/10.1007/978-3-030-57321-8_11