Unified Attentional Generative Adversarial Network for Brain Tumor Segmentation from Multimodal Unpaired Images

Abstract

In medical applications, the same anatomical structures may be observed in multiple modalities despite their different image characteristics. Currently, most deep models for multimodal segmentation rely on paired, registered images, which are difficult to obtain in many cases. Developing a model that can segment the target objects from unpaired images of different modalities is therefore valuable for many clinical applications. In this work, we propose a novel two-stream translation and segmentation unified attentional generative adversarial network (UAGAN), which performs any-to-any image modality translation and segments the target objects simultaneously whenever two or more modalities are available. The translation stream is used to capture modality-invariant features of the target anatomical structures. In addition, to focus on segmentation-related features, we add attentional blocks that extract valuable features from the translation stream. Experiments on three-modality brain tumor segmentation indicate that UAGAN outperforms the existing methods in most cases.
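
To make the described attention mechanism concrete, the sketch below shows one plausible way an attentional block could re-weight translation-stream features before fusing them into the segmentation stream. The layer configuration, gating rule, and feature sizes are illustrative assumptions for this sketch, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class AttentionalBlock(nn.Module):
    """Illustrative attention gate (hypothetical sketch): computes per-pixel,
    per-channel weights from both streams and uses them to inject
    translation-stream features into the segmentation stream."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),  # attention weights in [0, 1]
        )

    def forward(self, seg_feat, trans_feat):
        # Weights depend on both streams; they are applied to the translation features.
        weights = self.attn(torch.cat([seg_feat, trans_feat], dim=1))
        return seg_feat + weights * trans_feat  # fuse attended features into the segmentation stream

# Minimal usage: 32-channel feature maps from each stream at the same resolution.
block = AttentionalBlock(channels=32)
seg = torch.randn(1, 32, 64, 64)    # segmentation-stream features
trans = torch.randn(1, 32, 64, 64)  # translation-stream features
fused = block(seg, trans)           # shape: (1, 32, 64, 64)
```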

Citation (APA)

Yuan, W., Wei, J., Wang, J., Ma, Q., & Tasdizen, T. (2019). Unified Attentional Generative Adversarial Network for Brain Tumor Segmentation from Multimodal Unpaired Images. In Lecture Notes in Computer Science (Vol. 11766 LNCS, pp. 229–237). Springer. https://doi.org/10.1007/978-3-030-32248-9_26
