Multimodal Cardiac Segmentation Using Disentangled Representation Learning

Abstract

Magnetic Resonance (MR) protocols use several sequences to evaluate pathology and organ status. Yet, despite recent advances, the images of each sequence (hereafter, a modality) are analysed in isolation. We propose a method suitable for multimodal and multi-input learning and analysis that disentangles anatomical and imaging factors and combines anatomical content across the modalities to extract more accurate segmentation masks. Misregistrations between the inputs are handled with a Spatial Transformer Network (STN), which non-linearly aligns the (now intensity-invariant) anatomical factors. We demonstrate applications in Late Gadolinium Enhanced (LGE) and cine MRI segmentation. We show that multi-input models outperform single-input ones, and that a (semi-supervised) model can be trained with few (or no) annotations for one of the modalities. Code is available at https://github.com/agis85/multimodal_segmentation.
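
To make the pipeline concrete, below is a minimal PyTorch sketch of the idea described above. It is illustrative only, not the authors' implementation (see the repository linked above for that): the module names (AnatomyEncoder, DenseSTN, MultiInputSegmentor), layer sizes, and the element-wise max fusion are assumptions, and the imaging (intensity) factor and reconstruction decoder used by the full method are omitted. Each modality-specific encoder maps an image to intensity-invariant anatomical channels, a dense Spatial Transformer non-linearly warps one modality's anatomy onto the other's, and a segmentation head operates on the fused anatomy.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AnatomyEncoder(nn.Module):
    # Maps a one-channel image to C intensity-invariant anatomy channels.
    def __init__(self, channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x):
        # Channel-wise softmax yields a soft categorical anatomy map,
        # discarding modality-specific intensity information.
        return F.softmax(self.net(x), dim=1)

class DenseSTN(nn.Module):
    # Predicts a dense (non-linear) flow that warps `moving` onto `fixed`.
    def __init__(self, channels=8):
        super().__init__()
        self.flow = nn.Sequential(
            nn.Conv2d(2 * channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),  # per-pixel (dx, dy) offsets
        )

    def forward(self, moving, fixed):
        b = moving.size(0)
        # Identity sampling grid in normalised [-1, 1] coordinates.
        theta = torch.eye(2, 3, device=moving.device).unsqueeze(0).repeat(b, 1, 1)
        grid = F.affine_grid(theta, moving.size(), align_corners=False)
        offsets = self.flow(torch.cat([moving, fixed], 1)).permute(0, 2, 3, 1)
        return F.grid_sample(moving, grid + offsets, align_corners=False)

class MultiInputSegmentor(nn.Module):
    # Two modality-specific anatomy encoders, STN alignment, max fusion.
    def __init__(self, channels=8, classes=2):
        super().__init__()
        self.enc_lge = AnatomyEncoder(channels)
        self.enc_cine = AnatomyEncoder(channels)
        self.stn = DenseSTN(channels)
        self.head = nn.Conv2d(channels, classes, 1)

    def forward(self, lge, cine):
        a_lge = self.enc_lge(lge)
        a_cine = self.enc_cine(cine)
        a_cine_on_lge = self.stn(a_cine, a_lge)  # non-linear alignment
        fused = torch.max(a_lge, a_cine_on_lge)  # combine anatomical content
        return self.head(fused)                  # segmentation logits

# Example: segment a batch of paired (mis-registered) LGE/cine slices.
model = MultiInputSegmentor(channels=8, classes=2)
lge = torch.randn(4, 1, 96, 96)
cine = torch.randn(4, 1, 96, 96)
logits = model(lge, cine)  # shape: (4, 2, 96, 96)

The channel-wise softmax is what makes the anatomical factors approximately intensity-invariant, which in turn lets a single flow field align content from two modalities with very different appearance.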

Citation (APA)

Chartsias, A., Papanastasiou, G., Wang, C., Stirrat, C., Semple, S., Newby, D., … Tsaftaris, S. A. (2020). Multimodal Cardiac Segmentation Using Disentangled Representation Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12009 LNCS, pp. 128–137). Springer. https://doi.org/10.1007/978-3-030-39074-7_14
