Adversarial image synthesis for unpaired multi-modal cardiac data

Abstract

This paper demonstrates the potential for synthesising medical images in one modality (e.g. MR) from images in another (e.g. CT) using a CycleGAN [24] architecture. The synthesis can be learned from unpaired images and applied directly to expand the quantity of training data available for a given task. We demonstrate this approach by synthesising cardiac MR images from CT images, using a dataset of MR and CT images drawn from different patients. Since the synthetic images cannot be evaluated directly, as no ground-truth counterparts exist, we demonstrate their utility through a downstream segmentation task. Specifically, we show that training on both real and synthetic data increases segmentation accuracy by 15% compared to training on real data alone. Additionally, our synthetic data is of sufficient quality to be used on its own to train a segmentation neural network, which achieves 95% of the accuracy of the same model trained on real data.
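The unpaired training described above rests on CycleGAN's cycle-consistency constraint: translating an image CT→MR and back MR→CT should reconstruct the original, and likewise in the other direction. A minimal sketch of that loss, using hypothetical stand-in generators (toy affine functions in place of the paper's convolutional networks; all names are illustrative, not from the paper):

```python
def g_ct_to_mr(x):
    # Hypothetical stand-in generator, CT -> MR (a toy affine map).
    return [v * 0.5 + 1.0 for v in x]

def g_mr_to_ct(y):
    # Hypothetical stand-in generator, MR -> CT (exact inverse of the above).
    return [(v - 1.0) * 2.0 for v in y]

def l1(a, b):
    # Mean absolute (L1) reconstruction error between two images.
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

def cycle_consistency_loss(ct_batch, mr_batch):
    # Forward cycle: CT -> MR -> CT should reconstruct the CT input.
    fwd = l1(ct_batch, g_mr_to_ct(g_ct_to_mr(ct_batch)))
    # Backward cycle: MR -> CT -> MR should reconstruct the MR input.
    bwd = l1(mr_batch, g_ct_to_mr(g_mr_to_ct(mr_batch)))
    return fwd + bwd

ct = [0.2, 0.4, 0.6]
mr = [1.1, 1.2, 1.3]
# With exact-inverse generators the loss is ~0 (up to float rounding);
# in training it is minimised jointly with the adversarial losses.
print(cycle_consistency_loss(ct, mr))
```

Because no pairing between CT and MR patients is needed, this loss is what lets the mapping be learned from the unpaired dataset used in the paper.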

Citation (APA)

Chartsias, A., Joyce, T., Dharmakumar, R., & Tsaftaris, S. A. (2017). Adversarial image synthesis for unpaired multi-modal cardiac data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10557 LNCS, pp. 3–13). Springer Verlag. https://doi.org/10.1007/978-3-319-68127-6_1