Multimodal Brain MRI Translation Focused on Lesions

Abstract

Registered multimodal images are lacking in many medical image processing tasks. To obtain sufficient registered multimodal data, we propose a new unsupervised scheme for medical image translation based on cycle-consistent generative adversarial networks (CycleGAN), which generates registered multimodal images from a single modality while retaining lesion information. We improve the parameter initialization method, the upsampling method, and the loss terms to speed up model training and improve translation quality. Whereas previous studies focus only on the overall quality of the translation, we attach more importance to the lesion information in medical images, and we therefore propose a method for preserving lesion information during translation. We perform a series of multimodal translation experiments on the BRATS2015 dataset, verifying the effect of each of our improvements as well as the consistency of the lesion information between the translated images and the original images. We also verify the effectiveness and availability of the lesion information in the translated images.
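The core idea that lets CycleGAN work without registered pairs is the cycle-consistency objective: translating an image to the other modality and back should reconstruct the input. The sketch below illustrates that loss only; the generators `G` (T1 → T2) and `F` (T2 → T1) are hypothetical toy stand-ins, not the paper's network architecture, and the modality names are assumed for illustration.

```python
import numpy as np

# Toy stand-ins for the two CycleGAN generators. Real generators are
# convolutional networks; simple affine maps (chosen so F inverts G)
# are used here purely to make the cycle-consistency term concrete.
def G(x):
    """Hypothetical T1 -> T2 translation."""
    return 2.0 * x + 1.0

def F(x):
    """Hypothetical T2 -> T1 translation."""
    return (x - 1.0) / 2.0

def l1(a, b):
    """Mean absolute error, the usual cycle-consistency penalty."""
    return float(np.mean(np.abs(a - b)))

rng = np.random.default_rng(0)
x_t1 = rng.normal(size=(64, 64))  # synthetic stand-in for a T1 slice
x_t2 = rng.normal(size=(64, 64))  # synthetic stand-in for a T2 slice

# Cycle-consistency loss: F(G(x)) should reconstruct x in both
# directions, which is what removes the need for registered pairs.
cycle_loss = l1(F(G(x_t1)), x_t1) + l1(G(F(x_t2)), x_t2)
print(cycle_loss)  # ~0 here, since F exactly inverts G
```

In training, this term is added to the adversarial losses of both directions; the paper additionally adjusts the loss terms so that lesion regions are preserved through the cycle.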

Citation (APA)

Qu, Y., Deng, C., Su, W., Wang, Y., Lu, Y., & Chen, Z. (2020). Multimodal Brain MRI Translation Focused on Lesions. In ACM International Conference Proceeding Series (pp. 352–359). Association for Computing Machinery. https://doi.org/10.1145/3383972.3384024
