Registered multimodal images are scarce in many medical image processing tasks. To obtain sufficient registered multimodal data, in this paper we propose a new unsupervised scheme for medical image translation based on cycle-consistent generative adversarial networks (CycleGAN), which can generate registered multimodal images from a single modality while retaining lesion information. We improve the parameter initialization method, the upsampling method, and the loss terms to speed up model training and improve translation quality. Whereas previous studies focus only on the overall quality of the translation, we attach more importance to the lesion information in medical images and therefore propose a method for preserving lesion information during translation. We perform a series of multimodal translation experiments on the BRATS2015 dataset, verifying the effect of each of our improvements as well as the consistency of lesion information between the translated images and the original images. We also verify the effectiveness and usability of the lesion information in the translated images.
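The abstract names three training-side changes (parameter initialization, upsampling, loss terms) and a lesion-preservation mechanism but does not reproduce implementation details. The PyTorch sketch below is therefore a hypothetical illustration, not the authors' code: it assumes He initialization for convolutions, nearest-neighbor resize-convolution in place of transposed convolution, and a lesion-masked cycle-consistency penalty; all function names, the weight value, and the mask source are assumptions.

```python
# Hypothetical sketch of the kinds of changes the abstract describes
# (not the authors' released implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


def init_weights(module):
    """He (Kaiming) initialization for conv layers: one plausible choice for
    an 'improved parameter initialization' in a CycleGAN generator (assumption)."""
    if isinstance(module, nn.Conv2d):
        nn.init.kaiming_normal_(module.weight, nonlinearity="relu")
        if module.bias is not None:
            nn.init.zeros_(module.bias)


class UpsampleConvBlock(nn.Module):
    """Nearest-neighbor upsampling followed by a convolution, a common
    substitute for transposed convolution that reduces checkerboard artifacts."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.norm = nn.InstanceNorm2d(out_ch)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=2, mode="nearest")
        return F.relu(self.norm(self.conv(x)))


def lesion_consistency_loss(real, reconstructed, lesion_mask, weight=10.0):
    """Extra cycle-consistency penalty restricted to the lesion region.

    real, reconstructed: (N, C, H, W) images from the cycle x -> G(x) -> F(G(x))
    lesion_mask:         (N, 1, H, W) binary lesion mask, e.g. derived from the
                         BRATS ground-truth segmentation (assumption)
    """
    masked_diff = torch.abs(real - reconstructed) * lesion_mask
    return weight * masked_diff.sum() / (lesion_mask.sum() + 1e-8)
```

In a standard CycleGAN training loop, a term like `lesion_consistency_loss` would simply be added to the adversarial and global cycle-consistency losses, so that reconstruction errors inside the tumour region are penalized more heavily than elsewhere; this is one way the stated goal of preserving lesion information could be realized, under the assumptions above.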
Qu, Y., Deng, C., Su, W., Wang, Y., Lu, Y., & Chen, Z. (2020). Multimodal Brain MRI Translation Focused on Lesions. In ACM International Conference Proceeding Series (pp. 352–359). Association for Computing Machinery. https://doi.org/10.1145/3383972.3384024