3D anisotropic hybrid network: Transferring convolutional features from 2D images to 3D anisotropic volumes

Abstract

While deep convolutional neural networks (CNNs) have been successfully applied to 2D image analysis, it is still challenging to apply them to 3D medical images, especially when the within-slice resolution is much higher than the between-slice resolution. We propose a 3D Anisotropic Hybrid Network (AH-Net) that transfers convolutional features learned from 2D images to 3D anisotropic volumes. Such a transfer inherits the desired strong generalization capability for within-slice information while naturally exploiting between-slice information for more effective modelling. We experiment with the proposed 3D AH-Net on two different medical image analysis tasks, namely lesion detection from Digital Breast Tomosynthesis volumes, and liver and liver tumor segmentation from Computed Tomography volumes, and obtain state-of-the-art results on both.
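The following is a minimal PyTorch sketch of the anisotropic idea the abstract describes: within-slice (x-y) context is handled by kernels that can be initialized from a pretrained 2D CNN, while between-slice (z) context is added with inexpensive 1D kernels along the slice axis. The module names, kernel shapes, and weight-transfer helper below are illustrative assumptions, not the authors' exact AH-Net architecture.

```python
# Illustrative anisotropic convolution block, assuming PyTorch.
# Not the authors' exact AH-Net; kernel shapes and names are assumptions.
import torch
import torch.nn as nn


class AnisotropicConvBlock(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # Within-slice convolution: acts only on the high-resolution x-y plane.
        self.intra_slice = nn.Conv3d(
            in_channels, out_channels, kernel_size=(1, 3, 3), padding=(0, 1, 1)
        )
        # Between-slice convolution: mixes information along the low-resolution z axis.
        self.inter_slice = nn.Conv3d(
            out_channels, out_channels, kernel_size=(3, 1, 1), padding=(1, 0, 0)
        )
        self.act = nn.ReLU(inplace=True)

    def load_2d_weights(self, conv2d: nn.Conv2d) -> None:
        # Transfer a pretrained 2D kernel (out, in, 3, 3) into the within-slice
        # 3D kernel (out, in, 1, 3, 3) by adding a singleton depth dimension.
        with torch.no_grad():
            self.intra_slice.weight.copy_(conv2d.weight.unsqueeze(2))
            if conv2d.bias is not None and self.intra_slice.bias is not None:
                self.intra_slice.bias.copy_(conv2d.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.intra_slice(x))
        return self.act(self.inter_slice(x))


# Example: a batch of anisotropic sub-volumes (N, C, D, H, W) with few slices
# but high in-plane resolution.
block = AnisotropicConvBlock(1, 16)
volume = torch.randn(2, 1, 8, 128, 128)
features = block(volume)  # -> shape (2, 16, 8, 128, 128)
```

Splitting the 3D convolution this way keeps the in-plane receptive field identical to the pretrained 2D network while adding between-slice mixing at low extra cost, which is the property the abstract refers to as transferring 2D convolutional features to anisotropic volumes.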

Citation (APA)

Liu, S., Xu, D., Zhou, S. K., Pauly, O., Grbic, S., Mertelmeier, T., … Comaniciu, D. (2018). 3D anisotropic hybrid network: Transferring convolutional features from 2D images to 3D anisotropic volumes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11071 LNCS, pp. 851–858). Springer Verlag. https://doi.org/10.1007/978-3-030-00934-2_94
