Collaborative human-AI (CHAI): Evidence-based interpretable melanoma classification in dermoscopic images


Abstract

Automated dermoscopic image analysis has seen rapid growth in diagnostic performance. Adoption nevertheless faces resistance, in part because systems provide no evidence to support their decisions. In this work, an approach for evidence-based classification is presented. A feature embedding is learned with CNNs, triplet loss, and global average pooling, and is used to classify via k-NN search. Evidence is provided both as the discovered neighbors and as the localized image regions most relevant to measuring the distance between the query and those neighbors. To ensure that results are relevant in terms of both label accuracy and human visual similarity for any skill level, a novel hierarchical triplet logic is implemented to jointly learn an embedding according to disease labels and non-expert similarity. Results improve over baselines trained on disease labels alone, as well as over a standard multiclass loss. The quantitative relevance of results according to non-expert similarity, as well as that of the localized image regions, is also significantly improved.
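To make the pipeline concrete, the following is a minimal PyTorch sketch of the general recipe the abstract describes: a CNN with global average pooling produces an embedding, a margin-based triplet loss trains it, and a k-NN search over a labeled gallery yields both the prediction and its evidence (the retrieved neighbors). This is an illustration, not the authors' code; the network, margin, and helper names (EmbeddingNet, triplet_loss, knn_classify) are assumptions, and the paper's hierarchical triplet logic and region localization are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Small CNN backbone followed by global average pooling to produce an embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.proj = nn.Linear(128, embed_dim)

    def forward(self, x):
        h = self.features(x)          # (B, 128, H', W')
        h = h.mean(dim=(2, 3))        # global average pooling -> (B, 128)
        z = self.proj(h)
        return F.normalize(z, dim=1)  # L2-normalize so distances are comparable

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard margin-based triplet loss on embedded samples."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

@torch.no_grad()
def knn_classify(query, gallery_embeddings, gallery_labels, k=5):
    """Classify each query by majority vote over its k nearest gallery neighbors.
    The returned neighbor indices double as the 'evidence' for the decision."""
    dists = torch.cdist(query, gallery_embeddings)  # (Q, N) pairwise distances
    knn = dists.topk(k, largest=False).indices      # (Q, k) nearest neighbors
    votes = gallery_labels[knn]                     # (Q, k) neighbor labels
    preds = votes.mode(dim=1).values                # majority vote per query
    return preds, knn

# Hypothetical usage: embed a labeled gallery, then classify a query with evidence.
# net = EmbeddingNet()
# gallery_z = net(gallery_images)
# preds, evidence_idx = knn_classify(net(query_images), gallery_z, gallery_labels)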

Cite (APA)

Codella, N. C. F., Lin, C. C., Halpern, A., Hind, M., Feris, R., & Smith, J. R. (2018). Collaborative human-AI (CHAI): Evidence-based interpretable melanoma classification in dermoscopic images. In Lecture Notes in Computer Science (Vol. 11038, pp. 97–105). Springer. https://doi.org/10.1007/978-3-030-02628-8_11
