Deep Class-Specific Affinity-Guided Convolutional Network for Multimodal Unpaired Image Segmentation

Abstract

Multimodal medical image segmentation plays an essential role in clinical diagnosis. It remains challenging because the input modalities are often not well-aligned spatially. Existing learning-based methods mainly share trainable layers across modalities and minimize visual feature discrepancies. While the problem is often formulated as joint supervised feature learning, multi-scale features and class-specific representations have not yet been explored. In this paper, we propose an affinity-guided fully convolutional network for multimodal image segmentation. To learn effective representations, we design class-specific affinity matrices to encode the knowledge of hierarchical feature reasoning, together with shared convolutional layers to ensure cross-modality generalization. Our affinity matrix does not depend on spatial alignment of the visual features and thus allows us to train with unpaired multimodal inputs. We extensively evaluate our method on two public multimodal benchmark datasets, where it outperforms state-of-the-art methods.
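
The abstract does not spell out how the class-specific affinity matrices are formed, but one plausible reading is pairwise feature similarity weighted by per-class predictions, which depends only on feature content rather than on spatial correspondence between modalities. The PyTorch sketch below illustrates that idea; the function name class_specific_affinity and the probability-weighting scheme are assumptions for illustration, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def class_specific_affinity(features, class_logits):
    """Sketch of a per-class affinity matrix from convolutional features.

    Hypothetical formulation: pairwise cosine similarity between spatial
    feature vectors, weighted by the probability that both positions
    belong to a given class. The result depends only on feature content,
    not on spatial alignment across modalities.

    features:     (B, C, H, W) feature map from a shared conv layer
    class_logits: (B, K, H, W) per-pixel class scores
    returns:      (B, K, N, N) affinity matrices, with N = H * W
    """
    b, c, h, w = features.shape
    n = h * w
    feat = F.normalize(features.view(b, c, n), dim=1)  # unit-norm per position
    sim = torch.einsum('bcn,bcm->bnm', feat, feat)     # (B, N, N) cosine similarity
    prob = class_logits.softmax(dim=1).view(b, -1, n)  # (B, K, N) class probabilities
    # A_k[i, j] = p_k(i) * p_k(j) * sim(i, j): similarity counts toward
    # class k only where both positions are likely to belong to class k.
    aff = prob.unsqueeze(-1) * prob.unsqueeze(-2) * sim.unsqueeze(1)
    return aff

# Usage on dummy data: 4 classes over a 32x32 feature map.
feats = torch.randn(2, 64, 32, 32)
logits = torch.randn(2, 4, 32, 32)
A = class_specific_affinity(feats, logits)  # shape (2, 4, 1024, 1024)

Because such a matrix is computed from feature statistics alone, it could supervise or regularize features from unpaired modalities without any pixel-wise correspondence, which is consistent with the paper's motivation for training on unpaired inputs.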

Citation (APA)

Chen, J., Li, W., Li, H., & Zhang, J. (2020). Deep Class-Specific Affinity-Guided Convolutional Network for Multimodal Unpaired Image Segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12264 LNCS, pp. 187–196). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-59719-1_19
