Aggregating deep convolutional features for melanoma recognition in dermoscopy images

Abstract

We present a novel framework for automated melanoma recognition in dermoscopy images, a challenging task due to the high intra-class variation of melanoma and the low inter-class variation between melanoma and non-melanoma (benign) lesions. The proposed framework combines the merits of deep learning methods and local descriptor encoding strategies. Specifically, the deep representations of a dermoscopy image are first extracted with a very deep residual network pre-trained on ImageNet. These local deep descriptors are then aggregated by Fisher vector (FV) encoding to build a holistic image representation. Finally, the encoded representations are classified with a support vector machine (SVM). In contrast to previous studies that rely on complex preprocessing and feature engineering, or that directly fine-tune existing deep learning architectures on skin datasets, our solution is simpler, more compact, and capable of producing more discriminative features. Extensive experiments on the ISBI 2016 Skin Lesion Challenge dataset corroborate the effectiveness of the proposed method, which outperforms state-of-the-art approaches on all evaluation metrics.
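
The sketch below illustrates the general pipeline the abstract describes (local deep descriptors from a pre-trained residual network, Fisher vector encoding over a Gaussian mixture model, and an SVM classifier). It is not the authors' code; the choice of ResNet-50, the GMM size, and the SVM settings are assumptions for illustration.

```python
# Minimal sketch of the described pipeline, assuming PyTorch/torchvision and scikit-learn.
import numpy as np
import torch
import torchvision.models as models
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

# 1) Local deep descriptors: spatial activations of the last convolutional block
#    of an ImageNet pre-trained residual network (ResNet-50 assumed here).
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone = torch.nn.Sequential(*list(resnet.children())[:-2])  # drop avgpool + fc
backbone.eval()

def local_descriptors(image_batch):
    """Return an (N, H*W, C) array of local descriptors, one set per image."""
    with torch.no_grad():
        fmap = backbone(image_batch)                 # (N, C, H, W)
    n, c, h, w = fmap.shape
    return fmap.permute(0, 2, 3, 1).reshape(n, h * w, c).numpy()

# 2) Fisher vector encoding with respect to a diagonal-covariance GMM fitted on
#    training descriptors (gradients w.r.t. mixture means and variances).
def fisher_vector(desc, gmm):
    q = gmm.predict_proba(desc)                      # (T, K) posteriors
    t, _ = desc.shape
    pi, mu, var = gmm.weights_, gmm.means_, gmm.covariances_
    diff = (desc[:, None, :] - mu[None, :, :]) / np.sqrt(var)[None, :, :]
    g_mu = (q[:, :, None] * diff).sum(0) / (t * np.sqrt(pi)[:, None])
    g_var = (q[:, :, None] * (diff ** 2 - 1)).sum(0) / (t * np.sqrt(2 * pi)[:, None])
    fv = np.hstack([g_mu.ravel(), g_var.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))           # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)         # L2 normalization

# 3) Holistic representation + classification (illustrative usage):
# gmm = GaussianMixture(n_components=64, covariance_type="diag").fit(train_descriptors)
# X_train = np.stack([fisher_vector(d, gmm) for d in per_image_descriptors])
# clf = LinearSVC(C=1.0).fit(X_train, y_train)       # melanoma vs. benign
```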

Cite

CITATION STYLE

APA

Yu, Z., Jiang, X., Wang, T., & Lei, B. (2017). Aggregating deep convolutional features for melanoma recognition in dermoscopy images. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10541 LNCS, pp. 238–246). Springer Verlag. https://doi.org/10.1007/978-3-319-67389-9_28
