TumorGAN: A multi-modal data augmentation framework for brain tumor segmentation

88 citations · 80 readers (Mendeley)

Abstract

The high human labor demand involved in collecting paired medical imaging data severely impedes the application of deep learning methods to medical image processing tasks such as tumor segmentation. The situation is further worsened when collecting multi-modal image pairs. However, this issue can be mitigated with the help of generative adversarial networks, which can be used to generate realistic images. In this work, we propose a novel framework, named TumorGAN, to generate image segmentation pairs based on unpaired adversarial training. To improve the quality of the generated images, we introduce a regional perceptual loss to enhance the performance of the discriminator. We also develop a regional L1 loss to constrain the color of the imaged brain tissue. Finally, we verify the performance of TumorGAN on a public brain tumor data set, BraTS 2017. The experimental results demonstrate that the synthetic data pairs generated by our proposed method can practically improve tumor segmentation performance when applied to segmentation network training.
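The regional L1 loss mentioned in the abstract restricts the pixel-wise intensity penalty to a region of interest. The paper's exact formulation is not reproduced here; the following is a minimal sketch, assuming the region is given as a binary mask over the image (the function name and mask convention are illustrative, not from the paper):

```python
import numpy as np

def regional_l1_loss(generated, target, region_mask):
    """Hypothetical sketch of a region-restricted L1 loss: penalize
    absolute intensity differences only inside the masked brain-tissue
    region (mask values in {0, 1}), averaged over the masked pixels."""
    diff = np.abs(generated - target) * region_mask
    denom = np.maximum(region_mask.sum(), 1.0)  # guard against an empty mask
    return diff.sum() / denom
```

In practice such a term is added to the adversarial objective with a weighting coefficient, so the generator is constrained to match tissue intensities inside the region while the discriminator handles realism elsewhere.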

Citation (APA)

Li, Q., Yu, Z., Wang, Y., & Zheng, H. (2020, August 1). TumorGAN: A multi-modal data augmentation framework for brain tumor segmentation. Sensors (Switzerland). MDPI AG. https://doi.org/10.3390/s20154203
