MultiSDGAN: Translation of OCT Images to Superresolved Segmentation Labels Using Multi-Discriminators in Multi-Stages

Abstract

Optical coherence tomography (OCT) has been identified as a non-invasive and inexpensive imaging modality for discovering potential biomarkers for the diagnosis and progression of Alzheimer's disease. Current hypotheses presume that the thickness of the retinal layers, which is analyzable within OCT scans, is an effective biomarker for the presence of Alzheimer's. As a logical first step, this work concentrates on the accurate segmentation of the retinal layers to isolate them for further analysis. This paper proposes a generative adversarial network (GAN) that concurrently learns to increase image resolution for higher clarity and to segment the retinal layers: a multi-stage, multi-discriminatory generative adversarial network (MultiSDGAN) designed specifically for superresolution and segmentation of OCT scans of the retinal layers. The resulting generator is adversarially trained against multiple discriminator networks at multiple stages. By requiring the generator to satisfy all discriminators at multiple scales, we aim to avoid the early saturation of generator training that leads to poor segmentation accuracy and to enhance the OCT domain translation process. We also investigate incorporating the Dice loss and the Structural Similarity Index Measure (SSIM) as additional loss terms to specifically improve the segmentation and superresolution performance, respectively, of the proposed GAN architecture. The results of an ablation study conducted on our data set suggest that the proposed MultiSDGAN, evaluated with ten-fold cross-validation (10-CV), reduces the equal error rate with relative improvements of 44.24% and 34.09%, respectively (p-values of the improvement-level tests < .01). Furthermore, our experimental results also demonstrate that adding the new terms to the loss function significantly improves the segmentation results, with a relative improvement of 31.33% (p-value < .01).
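
The composite objective the abstract describes, adversarial terms from several stage-wise discriminators plus Dice and SSIM penalties, lends itself to a short illustration. The following is a minimal sketch assuming a PyTorch setting; the names (`generator_loss`, `dice_loss`, `ssim_loss`, the `lambdas` weights, and the stage-wise `outputs` list) are hypothetical illustrations of the idea, not the authors' actual formulation or code.

```python
# Hypothetical sketch of a MultiSDGAN-style generator objective:
# adversarial feedback from multiple discriminators at multiple stages,
# plus Dice (segmentation) and SSIM (superresolution) loss terms.
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for segmentation maps with values in [0, 1]."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def ssim_loss(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - SSIM, computed from global per-image statistics (the usual
    sliding-window SSIM is omitted here purely for brevity)."""
    mu_x, mu_y = x.mean(dim=(1, 2, 3)), y.mean(dim=(1, 2, 3))
    var_x, var_y = x.var(dim=(1, 2, 3)), y.var(dim=(1, 2, 3))
    cov = ((x - mu_x[:, None, None, None]) *
           (y - mu_y[:, None, None, None])).mean(dim=(1, 2, 3))
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return (1.0 - ssim).mean()

def generator_loss(outputs, targets, discriminators,
                   lambdas=(1.0, 1.0, 1.0)):
    """Sum adversarial losses over all stage-wise discriminators, then
    add Dice and SSIM penalties on the final (highest-resolution) output."""
    lam_adv, lam_dice, lam_ssim = lambdas
    adv = 0.0
    for disc, out in zip(discriminators, outputs):
        logits = disc(out)
        # Generator tries to make every discriminator label its output real.
        adv = adv + F.binary_cross_entropy_with_logits(
            logits, torch.ones_like(logits))
    final = outputs[-1]  # superresolved segmentation-label prediction
    return (lam_adv * adv
            + lam_dice * dice_loss(final, targets)
            + lam_ssim * ssim_loss(final, targets))
```

Penalizing the generator at every stage, rather than only at the final output, is what the abstract credits with avoiding early saturation of generator training: earlier stages receive gradient signal directly from their own discriminators instead of relying solely on backpropagation from the last scale.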

Citation (APA)

Jeihouni, P., Dehzangi, O., Amireskandari, A., Rezai, A., & Nasrabadi, N. M. (2022). MultiSDGAN: Translation of OCT Images to Superresolved Segmentation Labels Using Multi-Discriminators in Multi-Stages. IEEE Journal of Biomedical and Health Informatics, 26(4), 1614–1627. https://doi.org/10.1109/JBHI.2021.3110265
