3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation

  • Çiçek Ö
  • Abdulkadir A
  • Lienkamp S
  • et al.
ISSN: 1063-6919
Readers: 153 (Mendeley users who have this article in their library)

Abstract

This paper introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images. We outline two attractive use cases of this method: (1) In a semi-automated setup, the user annotates some slices in the volume to be segmented. The network learns from these sparse annotations and provides a dense 3D segmentation. (2) In a fully-automated setup, we assume that a representative, sparsely annotated training set exists. Trained on this data set, the network densely segments new volumetric images. The proposed network extends the previous u-net architecture from Ronneberger et al. by replacing all 2D operations with their 3D counterparts. The implementation performs on-the-fly elastic deformations for efficient data augmentation during training. It is trained end-to-end from scratch, i.e., no pre-trained network is required. We test the performance of the proposed method on a complex, highly variable 3D structure, the Xenopus kidney, and achieve good results for both use cases.
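The 2D-to-3D replacement the abstract describes can be sketched in plain NumPy. The helper names below are hypothetical and the loops are deliberately naive; the actual network uses learned multi-channel convolution kernels, batch normalization, and up-convolutions, none of which are shown here. The sketch only illustrates how a 2D sliding-window operation generalizes to a volumetric one by adding a depth axis:

```python
import numpy as np

def conv3d_valid(vol, kernel):
    """Naive 'valid' 3D cross-correlation over a single-channel volume,
    the volumetric counterpart of a 2D convolution in the original u-net."""
    kd, kh, kw = kernel.shape
    d, h, w = vol.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                # Weighted sum over a kd x kh x kw neighborhood.
                out[z, y, x] = np.sum(vol[z:z + kd, y:y + kh, x:x + kw] * kernel)
    return out

def maxpool3d(vol, s=2):
    """s x s x s max pooling, the volumetric counterpart of 2D max pooling."""
    d, h, w = (n // s for n in vol.shape)
    # Crop to a multiple of s, then reduce over the three block axes.
    return vol[:d * s, :h * s, :w * s].reshape(d, s, h, s, w, s).max(axis=(1, 3, 5))

vol = np.arange(4 ** 3, dtype=float).reshape(4, 4, 4)
feat = conv3d_valid(vol, np.ones((3, 3, 3)))   # shape (2, 2, 2)
pooled = maxpool3d(vol)                        # shape (2, 2, 2)
```

Every downsampling and upsampling stage of the architecture follows the same pattern: the operation is unchanged in spirit, but each spatial index gains a third dimension.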

Citation (APA)

Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T., & Ronneberger, O. (2016). 3D U-Net: Learning dense volumetric segmentation from sparse annotation. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, 424–432. Retrieved from http://arxiv.org/abs/1606.06650
