This paper presents and evaluates a deep learning architecture for automated basal cell carcinoma detection that integrates (1) image representation learning, (2) image classification and (3) result interpretability. A novel characteristic of this approach is that the deep learning architecture is extended with an interpretability layer that highlights the visual patterns contributing to the discrimination between cancerous and normal tissue, working akin to a digital stain that spotlights image regions important for diagnostic decisions. Experimental evaluation was performed on a set of 1,417 images from 308 regions of interest of skin histopathology slides, where the presence or absence of basal cell carcinoma must be determined. Different image representation strategies, including bag of features (BOF), canonical representations (the discrete cosine transform (DCT) and the Haar wavelet transform), and the proposed learned-from-data representations, were evaluated for comparison. Experimental results show that the representation learned from a large histology image data set has the best overall performance (89.4% in F-measure and 91.4% in balanced accuracy), an improvement of around 7% over canonical representations and 3% over the best equivalent BOF representation.
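The three-stage pipeline described above can be illustrated with a minimal sketch, written in PyTorch rather than the authors' code: a convolutional autoencoder stands in for unsupervised representation learning, a softmax head for classification, and the "digital stain" is rendered by weighting each learned feature map with its classifier weight for the cancer class. All layer sizes, patch dimensions, and names (PatchAutoencoder, SoftmaxHead, digital_stain) are illustrative assumptions, not values or identifiers from the paper.

```python
# Minimal sketch of the representation-learning / classification /
# interpretability pipeline. Shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    """Stage 1 (assumed form): learn filters by reconstructing patches."""
    def __init__(self, n_features=16):
        super().__init__()
        self.encoder = nn.Conv2d(3, n_features, kernel_size=8)
        self.decoder = nn.ConvTranspose2d(n_features, 3, kernel_size=8)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))   # learned feature maps
        return self.decoder(h), h

class SoftmaxHead(nn.Module):
    """Stage 2: classify an image from globally pooled feature activations."""
    def __init__(self, n_features=16, n_classes=2):
        super().__init__()
        self.fc = nn.Linear(n_features, n_classes)

    def forward(self, h):
        pooled = h.mean(dim=(2, 3))          # average each feature map
        return self.fc(pooled)

def digital_stain(head, h, cancer_class=1):
    """Stage 3: heatmap = feature maps weighted by cancer-class weights."""
    w = head.fc.weight[cancer_class]         # one weight per feature map
    return torch.einsum('f,bfhw->bhw', w, h)

# Toy usage on random "histopathology" patches (sizes are assumptions).
ae, head = PatchAutoencoder(), SoftmaxHead()
x = torch.randn(4, 3, 64, 64)
recon, h = ae(x)
logits = head(h)                             # cancer vs. normal scores
stain = digital_stain(head, h)               # per-pixel relevance map
print(logits.shape, stain.shape)
```

The key design point this sketch conveys is that interpretability comes almost for free: because the classifier is linear over pooled feature maps, the same weights that drive the prediction can be pushed back onto the spatial maps to produce the stain-like visualization, with no separate explanation model.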
CITATION
Cruz-Roa, A. A., Arevalo Ovalle, J. E., Madabhushi, A., & González Osorio, F. A. (2013). A deep learning architecture for image representation, visual interpretability and automated basal-cell carcinoma cancer detection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8150 LNCS, pp. 403–410). https://doi.org/10.1007/978-3-642-40763-5_50