Cross-Modal Attention-Guided Convolutional Network for Multi-modal Cardiac Segmentation

Abstract

To leverage the correlated information between modalities for cross-modal segmentation, we propose a novel cross-modal attention-guided convolutional network for multi-modal cardiac segmentation. In particular, we first employ cycle-consistent generative adversarial networks (CycleGAN) to perform bidirectional image generation (i.e., MR to CT and CT to MR), which helps reduce modality-level inconsistency. Then, with the generated and original MR and CT images, a novel convolutional network is proposed in which (1) two encoders learn individual features separately and (2) a common decoder learns shareable features between modalities for a final consistent segmentation. We further propose a cross-modal attention module between the encoders and the decoder to exploit the correlated information between modalities. Our model can be trained in an end-to-end manner. In extensive evaluations on unpaired CT and MR cardiac images, our method outperforms the baselines in segmentation performance.
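The abstract gives no implementation details, but the two-encoder/shared-decoder layout with a cross-modal attention module can be illustrated. Below is a minimal PyTorch sketch under assumptions of our own: a non-local-style attention formulation, single-channel 2D inputs, and toy encoder/decoder depths. None of the module names, feature sizes, or the exact attention computation are taken from the paper; this is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalAttention(nn.Module):
    """Hypothetical non-local-style attention from one modality to the other."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        # x_a: features of the modality being segmented; x_b: the other modality
        b, c, h, w = x_a.shape
        q = self.query(x_a).flatten(2).transpose(1, 2)       # (B, HW, C/8)
        k = self.key(x_b).flatten(2)                         # (B, C/8, HW)
        attn = F.softmax((q @ k) / (c // 8) ** 0.5, dim=-1)  # (B, HW, HW)
        v = self.value(x_b).flatten(2).transpose(1, 2)       # (B, HW, C)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x_a + self.gamma * out  # inject cross-modal context residually


class DualEncoderSharedDecoder(nn.Module):
    """Two modality-specific encoders feeding one shared decoder (sketch only)."""
    def __init__(self, num_classes: int = 4, ch: int = 64):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
        self.enc_ct, self.enc_mr = encoder(), encoder()
        self.attn = CrossModalAttention(ch)
        self.decoder = nn.Sequential(  # shared across modalities
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, ct: torch.Tensor, mr: torch.Tensor):
        f_ct, f_mr = self.enc_ct(ct), self.enc_mr(mr)
        # each modality's features are enriched with the other's context
        # before passing through the common decoder
        return self.decoder(self.attn(f_ct, f_mr)), self.decoder(self.attn(f_mr, f_ct))


# quick shape check with dummy single-channel slices
net = DualEncoderSharedDecoder()
seg_ct, seg_mr = net(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
assert seg_ct.shape == (2, 4, 64, 64)
```

In this sketch the attention lets CT features query MR features (and vice versa) before decoding, which mirrors the stated goal of letting the shared decoder exploit correlated information across modalities; the actual architecture, losses, and CycleGAN preprocessing in the paper may differ substantially.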

Cite

CITATION STYLE

APA

Zhou, Z., Guo, X., Yang, W., Shi, Y., Zhou, L., Wang, L., & Yang, M. (2019). Cross-Modal Attention-Guided Convolutional Network for Multi-modal Cardiac Segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11861 LNCS, pp. 601–610). Springer. https://doi.org/10.1007/978-3-030-32692-0_69
