Quantifying the uncertainty of deep learning-based computer-aided diagnosis for patient safety


Abstract

In this work, we discuss epistemic uncertainty estimation obtained by Bayesian inference in diagnostic classifiers and show that prediction uncertainty correlates strongly with the goodness of the prediction. We train the ResNet-18 image classifier on a dataset of 84,484 optical coherence tomography scans showing four different retinal conditions. Dropout is added before every building block of ResNet, creating an approximation to a Bayesian classifier. Monte Carlo sampling with dropout is applied at test time for uncertainty estimation: multiple stochastic forward passes are performed to obtain a distribution over the class labels. The variance and the entropy of this distribution are used as uncertainty metrics. Our results show a strong correlation (ρ = 0.99) between prediction uncertainty and prediction error. The mean uncertainty of incorrectly diagnosed cases was significantly higher than that of correctly diagnosed cases. Modeling prediction uncertainty in computer-aided diagnosis with deep learning yields more reliable results and is therefore expected to increase patient safety. This will help to transfer such systems into clinical routine and to increase the acceptance of machine learning in diagnosis from the standpoint of physicians and patients.
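
The following is a minimal sketch of the Monte Carlo dropout procedure described above, assuming PyTorch and torchvision. The dropout rate, the number of forward passes, and the placement of dropout before each residual layer group are illustrative assumptions, not the authors' exact configuration (the paper inserts dropout before every building block).

    # Minimal sketch of MC dropout uncertainty estimation (assumed setup,
    # not the authors' exact implementation).
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    NUM_CLASSES = 4   # four retinal conditions in the OCT dataset
    T = 25            # number of stochastic forward passes (assumed value)

    model = resnet18(num_classes=NUM_CLASSES)
    # Illustrative placement: dropout before each residual layer group.
    model.layer1 = torch.nn.Sequential(torch.nn.Dropout(p=0.2), model.layer1)
    model.layer2 = torch.nn.Sequential(torch.nn.Dropout(p=0.2), model.layer2)
    model.layer3 = torch.nn.Sequential(torch.nn.Dropout(p=0.2), model.layer3)
    model.layer4 = torch.nn.Sequential(torch.nn.Dropout(p=0.2), model.layer4)

    def mc_dropout_predict(model, x, n_samples=T):
        """Return the mean softmax prediction plus variance and entropy
        of the sampled predictive distribution."""
        model.eval()
        # Re-enable dropout at test time so each forward pass is stochastic.
        for m in model.modules():
            if isinstance(m, torch.nn.Dropout):
                m.train()
        with torch.no_grad():
            probs = torch.stack(
                [F.softmax(model(x), dim=1) for _ in range(n_samples)]
            )  # shape: (n_samples, batch, classes)
        mean_probs = probs.mean(dim=0)          # predictive distribution
        variance = probs.var(dim=0).sum(dim=1)  # summed class-wise variance
        entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)
        return mean_probs, variance, entropy

    # Usage with a hypothetical batch of OCT scans:
    # scans = torch.randn(8, 3, 224, 224)
    # mean_probs, var_u, ent_u = mc_dropout_predict(model, scans)

High variance or entropy for a given scan would then flag a prediction as uncertain, which is the property the paper correlates with diagnostic error.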

Citation (APA)

Laves, M. H., Ihler, S., Ortmaier, T., & Kahrs, L. A. (2019). Quantifying the uncertainty of deep learning-based computer-aided diagnosis for patient safety. In Current Directions in Biomedical Engineering (Vol. 5, pp. 223–226). Walter de Gruyter GmbH. https://doi.org/10.1515/cdbme-2019-0057
