Deep Multi-Modal Encoder-Decoder Networks for Shape Constrained Segmentation and Joint Representation Learning

Abstract

Deep learning approaches have been very successful in segmenting cardiac structures from CT and MR volumes. Despite continuous progress, automated segmentation of these structures remains challenging due to highly complex regional characteristics (e.g., homogeneous gray-level transitions) and large anatomical shape variability. To cope with these challenges, incorporating shape priors into neural networks for robust segmentation is an active area of current research. We propose a novel approach that leverages shared information across imaging modalities and shape segmentations within a unified multi-modal encoder-decoder network. This jointly end-to-end trainable architecture improves robustness through strong shape constraints and enables further applications, such as shape interpolation, owing to smooth transitions in the learned shape space. Although no skip connections are used and all shape information is encoded in a low-dimensional representation, our approach achieves high-accuracy segmentation and consistent shape interpolation results on the multi-modal whole heart segmentation dataset.
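
To make the described architecture concrete, below is a minimal PyTorch sketch of the idea: separate encoders for CT images, MR images, and one-hot shape segmentations map their inputs into a shared low-dimensional latent space, and a single shared decoder reconstructs segmentation logits from that code alone, with no skip connections. All layer sizes, the 2D instantiation, the latent dimension, and names such as MultiModalShapeNet are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (assumed configuration, not the paper's exact network):
# per-modality encoders and a shape encoder share one low-dimensional
# latent space; a single decoder maps the latent code to segmentation
# logits without any skip connections.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Strided conv halves spatial resolution; keeps the sketch compact.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class Encoder(nn.Module):
    """Encodes an input (image or one-hot shape) to a latent vector."""
    def __init__(self, in_ch, latent_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(in_ch, 16),  # 128 -> 64
            conv_block(16, 32),     # 64  -> 32
            conv_block(32, 64),     # 32  -> 16
            conv_block(64, 64),     # 16  -> 8
        )
        self.fc = nn.Linear(64 * 8 * 8, latent_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class Decoder(nn.Module):
    """Decodes a latent vector to segmentation logits; no skip connections."""
    def __init__(self, latent_dim=64, n_classes=8):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 8 * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(True),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(True),
            nn.ConvTranspose2d(16, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, z):
        return self.up(self.fc(z).view(-1, 64, 8, 8))

class MultiModalShapeNet(nn.Module):
    """One encoder per input type, one shared decoder over the joint latent space."""
    def __init__(self, latent_dim=64, n_classes=8):
        super().__init__()
        self.enc = nn.ModuleDict({
            "ct": Encoder(1, latent_dim),
            "mr": Encoder(1, latent_dim),
            "shape": Encoder(n_classes, latent_dim),  # one-hot segmentations
        })
        self.dec = Decoder(latent_dim, n_classes)

    def forward(self, x, modality):
        return self.dec(self.enc[modality](x))

# Shape interpolation demo (random tensors stand in for real data):
# convex combinations of two latent codes decode to intermediate shapes,
# mirroring the smooth transitions in the learned shape space.
net = MultiModalShapeNet()
ct, mr = torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128)
z_a, z_b = net.enc["ct"](ct), net.enc["mr"](mr)
for alpha in (0.0, 0.5, 1.0):
    seg = net.dec((1 - alpha) * z_a + alpha * z_b).argmax(1)
```

Because the decoder sees only the low-dimensional code, every output is constrained to lie in the learned shape space; this is what the final loop illustrates, decoding blended latent codes into intermediate segmentations.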

Citation (APA)

Bouteldja, N., Merhof, D., Ehrhardt, J., & Heinrich, M. P. (2019). Deep Multi-Modal Encoder-Decoder Networks for Shape Constrained Segmentation and Joint Representation Learning. In Informatik aktuell (pp. 23–28). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-658-25326-4_8
