Crowd disagreement about medical images is informative


Abstract

Classifiers for medical image analysis are often trained with a single consensus label, based on combining labels given by experts or crowds. However, disagreement between annotators may be informative, and thus removing it may not be the best strategy. As a proof of concept, we predict whether a skin lesion from the ISIC 2017 dataset is a melanoma or not, based on crowd annotations of visual characteristics of that lesion. We compare using the mean annotations, illustrating consensus, to standard deviations and other distribution moments, illustrating disagreement. We show that the mean annotations perform best, but that the disagreement measures are still informative. We also make the crowd annotations used in this paper available at https://figshare.com/s/5cbbce14647b66286544.
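The abstract describes summarizing each lesion's crowd annotations with distribution moments: the mean represents consensus, while the standard deviation and higher moments represent disagreement. A minimal sketch of such a feature extractor is below; the function name and input format are illustrative assumptions, not the authors' code.

```python
from statistics import mean, stdev

def crowd_features(scores):
    """Summarize one lesion's crowd ratings for a visual characteristic.

    `scores` is a hypothetical list of per-annotator ratings. The mean
    captures consensus; the standard deviation and skewness capture
    disagreement among annotators, which the paper argues is informative.
    """
    m = mean(scores)
    s = stdev(scores) if len(scores) > 1 else 0.0
    # Third standardized moment (skewness); defined as 0 when there is no spread.
    skew = 0.0
    if s > 0:
        skew = sum(((x - m) / s) ** 3 for x in scores) / len(scores)
    return [m, s, skew]
```

Feature vectors like these, computed per annotated characteristic, could then be fed to any standard classifier to predict the melanoma label.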

Citation (APA)

Cheplygina, V., & Pluim, J. P. W. (2018). Crowd disagreement about medical images is informative. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11043 LNCS, pp. 105–111). Springer Verlag. https://doi.org/10.1007/978-3-030-01364-6_12
