Robust Segmentation Models Using an Uncertainty Slice Sampling-Based Annotation Workflow


Abstract

Semantic segmentation neural networks require pixel-level annotations in large quantities to achieve a good performance. In the medical domain, such annotations are expensive because they are time-consuming and require expert knowledge. Active learning optimizes the annotation effort by devising strategies to select cases for labeling that are the most informative to the model. In this work, we propose an uncertainty slice sampling (USS) strategy for the semantic segmentation of 3D medical volumes that selects 2D image slices for annotation, and we compare it with various other strategies. We demonstrate the efficiency of USS on a CT liver segmentation task using multisite data. After five iterations, the training data resulting from USS consisted of 2410 slices (4% of all slices in the data pool) compared to 8121 (13%), 8641 (14%), and 3730 (6%) slices for uncertainty volume (UVS), random volume (RVS), and random slice (RSS) sampling, respectively. Despite being trained on the smallest amount of data, the model based on the USS strategy evaluated on 234 test volumes significantly outperformed models trained according to the UVS, RVS, and RSS strategies and achieved a mean Dice index of 0.964, a relative volume error of 4.2%, a mean surface distance of 1.35 mm, and a Hausdorff distance of 23.4 mm. This was only slightly inferior to 0.967, 3.8%, 1.18 mm, and 22.9 mm achieved by a model trained on all available data. Our robustness analysis using the 5th percentile of Dice and the 95th percentile of the remaining metrics demonstrated that USS not only resulted in the most robust model compared to other strategies, but also outperformed the model trained on all data according to the 5th percentile of Dice (0.946 vs. 0.945) and the 95th percentile of mean surface distance (1.92 mm vs. 2.03 mm).
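The core idea of slice-level uncertainty sampling can be illustrated with a minimal sketch. The snippet below scores each axial slice of a volume by the mean per-voxel binary entropy of the model's foreground probabilities and returns the most uncertain slices for annotation. The function names and the choice of mean entropy as the uncertainty measure are illustrative assumptions, not the exact scoring used in the paper.

```python
import numpy as np


def slice_entropy(probs: np.ndarray, eps: float = 1e-7) -> np.ndarray:
    """Mean per-voxel binary entropy for each axial slice.

    probs: (n_slices, H, W) array of predicted foreground
    probabilities in [0, 1]. Returns an (n_slices,) score array;
    higher values indicate more model uncertainty.
    """
    p = np.clip(probs, eps, 1.0 - eps)  # avoid log(0)
    ent = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    return ent.mean(axis=(1, 2))


def select_slices(probs: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k most uncertain slices, most uncertain first."""
    scores = slice_entropy(probs)
    return np.argsort(scores)[::-1][:k]


# Toy volume: all slices confidently background except slice 2,
# which the model is maximally unsure about (p = 0.5 everywhere).
probs = np.full((4, 8, 8), 0.99)
probs[2] = 0.5
print(select_slices(probs, 1))  # slice 2 is picked for annotation
```

In an active-learning loop, these selected slices would be sent to an annotator, added to the training set, and the model retrained before the next scoring round.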


Citation (APA)

Chlebus, G., Schenk, A., Hahn, H. K., Van Ginneken, B., & Meine, H. (2022). Robust Segmentation Models Using an Uncertainty Slice Sampling-Based Annotation Workflow. IEEE Access, 10, 4728–4738. https://doi.org/10.1109/ACCESS.2022.3141021
