Bridging the Gap Between 2D and 3D Organ Segmentation with Volumetric Fusion Net

Abstract

There has been a debate on whether to use 2D or 3D deep neural networks for volumetric organ segmentation. Both 2D and 3D models have their advantages and disadvantages. In this paper, we present an alternative framework, which trains 2D networks on different viewpoints for segmentation, and builds a 3D Volumetric Fusion Net (VFN) to fuse the 2D segmentation results. VFN is relatively shallow and contains far fewer parameters than most 3D networks, making our framework more efficient at integrating 3D information for segmentation. We train and test the segmentation and fusion modules individually, and propose a novel strategy, named cross-cross-augmentation, to make full use of the limited training data. We evaluate our framework on several challenging abdominal organs, and verify its superiority in segmentation accuracy and stability over existing 2D and 3D approaches.
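To make the fusion step more concrete, the sketch below shows one plausible way to combine per-viewpoint 2D segmentation outputs with a shallow 3D network, in the spirit of the abstract. It is a minimal illustration, not the authors' released VFN: the layer widths, depth, and the residual-style shortcut over the averaged 2D probabilities are assumptions made here for readability.

```python
# Minimal sketch (assumed architecture, not the published VFN configuration):
# 2D networks are run slice-by-slice along the axial, coronal, and sagittal
# axes; their per-voxel probability maps are stacked as channels and refined
# by a shallow 3D convolutional network.
import torch
import torch.nn as nn


class ShallowFusionNet3D(nn.Module):
    """Fuses three single-organ probability volumes (one per viewpoint)."""

    def __init__(self, in_views: int = 3, width: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_views, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, 1, kernel_size=3, padding=1),
        )

    def forward(self, view_probs: torch.Tensor) -> torch.Tensor:
        # view_probs: (batch, 3, D, H, W) stacked 2D-view probabilities in [0, 1].
        # Residual-style shortcut: the 3D network only learns a correction on
        # top of the averaged 2D predictions, so it can stay shallow.
        avg = view_probs.mean(dim=1, keepdim=True)
        fused_logits = self.body(view_probs) + torch.logit(avg.clamp(1e-4, 1 - 1e-4))
        return torch.sigmoid(fused_logits)


if __name__ == "__main__":
    fusion = ShallowFusionNet3D()
    dummy = torch.rand(1, 3, 64, 64, 64)  # axial / coronal / sagittal probability maps
    print(fusion(dummy).shape)            # torch.Size([1, 1, 64, 64, 64])
```

The shortcut over the averaged 2D predictions is one way to keep the 3D module small, since it only needs to learn where the viewpoints disagree; the actual parameterization used in the paper may differ.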

Citation (APA)

Xia, Y., Xie, L., Liu, F., Zhu, Z., Fishman, E. K., & Yuille, A. L. (2018). Bridging the Gap Between 2D and 3D Organ Segmentation with Volumetric Fusion Net. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11073 LNCS, pp. 445–453). Springer Verlag. https://doi.org/10.1007/978-3-030-00937-3_51
