A Vessel Segmentation-based CycleGAN for Unpaired Multi-modal Retinal Image Synthesis

Abstract

Unpaired image-to-image translation of retinal images can efficiently increase the training dataset for deep-learning-based multi-modal retinal registration methods. Our method integrates a vessel segmentation network into the image-to-image translation task by extending the CycleGAN framework. The segmentation network is inserted prior to a UNet vision transformer generator network and serves as a shared representation between both domains. We reformulate the original identity loss to learn the direct mapping between the vessel segmentation and the real image. Additionally, we add a segmentation loss term to ensure shared vessel locations between fake and real images. In our experiments, the method produces visually realistic images and preserves the vessel structures, which is a prerequisite for generating multi-modal training data for image registration.
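To make the loss formulation above concrete, here is a minimal sketch of how a segmentation-consistency term could be combined with the usual CycleGAN objectives. The function names, the L1 form of the segmentation term, and all loss weights are illustrative assumptions, not the paper's actual implementation or values.

```python
import numpy as np

def l1_loss(a, b):
    # Mean absolute error between two arrays.
    return float(np.mean(np.abs(a - b)))

def segmentation_consistency_loss(seg_real, seg_fake):
    # Hypothetical segmentation loss term: penalizes differences between
    # the vessel maps of the real image and the translated (fake) image,
    # encouraging shared vessel locations across both domains.
    return l1_loss(seg_real, seg_fake)

def total_generator_loss(adv, cycle, identity, seg,
                         w_cycle=10.0, w_id=5.0, w_seg=1.0):
    # Illustrative weighted sum in the spirit of CycleGAN objectives;
    # the weights here are placeholders, not values from the paper.
    return adv + w_cycle * cycle + w_id * identity + w_seg * seg
```

If the vessel segmentations of the real and translated images match exactly, the segmentation term vanishes, so the generator is only penalized when the translation moves or removes vessels.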

Citation (APA)

Sindel, A., Maier, A., & Christlein, V. (2023). A Vessel Segmentation-based CycleGAN for Unpaired Multi-modal Retinal Image Synthesis. In Informatik aktuell (pp. 33–37). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-658-41657-7_11
