Spatial-Intensity Transform GANs for High Fidelity Medical Image-to-Image Translation

Abstract

Despite recent progress in image-to-image translation, it remains challenging to apply such techniques to clinical-quality medical images. We develop a novel parameterization of conditional generative adversarial networks that achieves high image fidelity when trained to transform MRIs conditioned on a patient’s age and disease severity. The spatial-intensity transform generative adversarial network (SIT-GAN) constrains the generator to a smooth spatial transform composed with sparse intensity changes. This technique improves image quality and robustness to artifacts, and generalizes to different scanners. We demonstrate SIT-GAN on a large clinical image dataset of stroke patients, where it captures associations between ventricle expansion and aging, as well as between white matter hyperintensities and stroke severity. Additionally, SIT-GAN provides a disentangled view of the variation in shape and appearance across subjects.
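To illustrate the decomposition the abstract describes — a generator output expressed as a spatial warp of the input composed with a sparse additive intensity change — here is a minimal numpy sketch. This is not the authors' implementation (the paper's generator predicts the displacement and intensity fields with a neural network and uses differentiable interpolation); the function names and the nearest-neighbor warp are illustrative assumptions.

```python
import numpy as np

def warp_image(img, disp):
    """Warp a 2D image by a per-pixel displacement field.

    img  : (H, W) array
    disp : (2, H, W) array of (dy, dx) displacements, sampled at each
           output pixel (nearest-neighbor interpolation for simplicity;
           a real model would use differentiable bilinear sampling).
    """
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + disp[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + disp[1]).astype(int), 0, w - 1)
    return img[src_y, src_x]

def spatial_intensity_transform(img, disp, intensity):
    """Compose a smooth spatial warp with an additive intensity map.

    In SIT-GAN the displacement field is regularized to be smooth and
    the intensity map to be sparse; here both are simply given inputs.
    """
    return warp_image(img, disp) + intensity
```

With a zero displacement field and a zero intensity map this reduces to the identity, which is one reason such a parameterization tends to preserve anatomy: changes must be explicitly expressed as either geometry (the warp) or appearance (the sparse intensity residual).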

Citation (APA)

Wang, C. J., Rost, N. S., & Golland, P. (2020). Spatial-Intensity Transform GANs for High Fidelity Medical Image-to-Image Translation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12262 LNCS, pp. 749–759). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-59713-9_72
