Scribble2D5: Weakly-Supervised Volumetric Image Segmentation via Scribble Annotations

Abstract

Image segmentation using weak annotations such as scribbles has attracted considerable attention, since scribbles are far easier to obtain than time-consuming, labor-intensive pixel- or voxel-level labels. However, scribbles lack structural information about the region of interest (ROI), so existing scribble-based methods suffer from poor boundary localization. Moreover, most current methods are designed for 2D image segmentation and do not fully leverage volumetric information. In this paper, we propose a scribble-based volumetric image segmentation method, Scribble2D5, which tackles 3D anisotropic image segmentation and improves boundary prediction. To achieve this, we augment a 2.5D attention UNet with a label propagation module that extends semantic information from scribbles, and with a combination of static and active boundary prediction that learns the ROI’s boundaries and regularizes its shape. Extensive experiments on three public datasets demonstrate that Scribble2D5 significantly outperforms existing scribble-based methods and approaches the performance of fully-supervised ones. Our code is available at https://github.com/Qybc/Scribble2D5.
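To make the 2.5D design mentioned in the abstract concrete, below is a minimal PyTorch sketch of a 2.5D convolution block. It decomposes a 3D convolution into slice-wise in-plane convolutions plus a separate through-plane convolution along the slice axis, a common way to handle anisotropic volumes with thick slices. The block name and structure here are illustrative assumptions, not the authors' implementation; see the linked repository for the actual Scribble2D5 model.

```python
import torch
import torch.nn as nn


class Conv2d5Block(nn.Module):
    """Hypothetical 2.5D block (not the authors' code): slice-wise 3x3
    convolutions capture in-plane features at full resolution, then a
    cheap kernel along the slice axis fuses adjacent slices. This avoids
    isotropic 3x3x3 kernels, which fit anisotropic data poorly."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # In-plane (H, W) convolution applied per slice: kernel (1, 3, 3).
        self.inplane = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Through-plane (D) convolution: kernel (3, 1, 1) mixes neighboring slices.
        self.throughplane = nn.Sequential(
            nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
            nn.InstanceNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):  # x: (B, C, D, H, W)
        return self.throughplane(self.inplane(x))


if __name__ == "__main__":
    block = Conv2d5Block(1, 16)
    # Anisotropic volume: few slices (D=24), large in-plane size (128x128).
    vol = torch.randn(2, 1, 24, 128, 128)
    print(block(vol).shape)  # torch.Size([2, 16, 24, 128, 128])
```

The rationale for such a decomposition, under the stated assumptions, is that volumes with large slice spacing carry most of their fine detail in-plane, so learning 2D features per slice and mixing across slices with a lightweight separate kernel matches the data's resolution better than full 3D kernels.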

Citation (APA)

Chen, Q., & Hong, Y. (2022). Scribble2D5: Weakly-Supervised Volumetric Image Segmentation via Scribble Annotations. In Lecture Notes in Computer Science (Vol. 13438 LNCS, pp. 234–243). Springer. https://doi.org/10.1007/978-3-031-16452-1_23
