Towards safe deep learning: Accurately quantifying biomarker uncertainty in neural network predictions

Abstract

Automated medical image segmentation, particularly using deep learning, has shown outstanding performance in semantic segmentation tasks. However, these methods rarely quantify their uncertainty, which may lead to errors in downstream analysis. In this work, we propose to use Bayesian neural networks to quantify uncertainty within the domain of semantic segmentation. We also propose a method to convert voxel-wise segmentation uncertainty into volumetric uncertainty, and to calibrate the accuracy and reliability of confidence intervals for derived measurements. When applied to a tumour volume estimation task, we demonstrate that with such modelling of uncertainty, deep learning systems can report volume estimates with well-calibrated error bars, making them safer for clinical use. We also show that the uncertainty estimates extrapolate to unseen data, and that the confidence intervals are robust in the presence of artificial noise. This could provide a form of quality control and quality assurance, and may permit further adoption of deep learning tools in the clinic.
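
The abstract describes a pipeline in which stochastic segmentation samples are converted into a volumetric estimate with calibrated error bars. The snippet below is a minimal sketch of that idea, not the authors' implementation: it assumes that repeated stochastic forward passes of a Bayesian segmentation network (for example, via Monte Carlo dropout) have already produced T foreground-probability maps, and the hypothetical helper volume_confidence_interval turns them into a mean volume and a percentile confidence interval.

```python
import numpy as np

def volume_confidence_interval(prob_samples, voxel_volume_mm3, alpha=0.05):
    """Convert T stochastic segmentations into a volume estimate with a CI.

    prob_samples : (T, X, Y, Z) array of foreground probabilities, one map
                   per stochastic forward pass (e.g. Monte Carlo dropout).
    voxel_volume_mm3 : physical volume of a single voxel in mm^3.
    alpha : 1 - confidence level (0.05 gives a 95% interval).
    """
    # One volume estimate per stochastic sample: threshold, count, scale.
    volumes = (prob_samples > 0.5).sum(axis=(1, 2, 3)) * voxel_volume_mm3
    # Percentile interval over the sampled volumes.
    lo, hi = np.percentile(volumes, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return volumes.mean(), (lo, hi)

# Toy example: 20 simulated forward passes over a 32^3 volume.
rng = np.random.default_rng(0)
samples = rng.uniform(size=(20, 32, 32, 32))
mean_vol, (lo, hi) = volume_confidence_interval(samples, voxel_volume_mm3=1.0)
print(f"volume = {mean_vol:.0f} mm^3, 95% CI = [{lo:.0f}, {hi:.0f}] mm^3")
```

In the paper's setting, such raw intervals would additionally be calibrated against held-out data so that a nominal 95% interval actually covers the true volume 95% of the time; that calibration step is omitted here.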

Cite (APA)

Eaton-Rosen, Z., Bragman, F., Bisdas, S., Ourselin, S., & Cardoso, M. J. (2018). Towards safe deep learning: Accurately quantifying biomarker uncertainty in neural network predictions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11070 LNCS, pp. 691–699). Springer. https://doi.org/10.1007/978-3-030-00928-1_78
