Automated FAZ segmentation and diabetic retinopathy classification using OCTA images

Abstract

Background: Accurate segmentation of the foveal avascular zone (FAZ) is valuable in retinal imaging, as FAZ alterations are key biomarkers for diabetic retinopathy (DR). This study presents an automated framework exploring the feasibility of FAZ segmentation and DR classification using optical coherence tomography angiography (OCTA) images.

Methods: In this cross-sectional study conducted at Farabi Eye Hospital, Tehran, Iran, a two-step deep learning pipeline was developed. First, a neural network combining DeepLabv3+, EfficientNetB0, Squeeze-and-Excitation (SE) blocks, and Atrous Spatial Pyramid Pooling (ASPP) was trained to segment the FAZ from superficial capillary plexus (SCP) and deep capillary plexus (DCP) OCTA slabs. Second, a GoogLeNet-based convolutional neural network (CNN) classified the segmented FAZ images into binary (normal vs. DR) and three-class (normal, non-proliferative DR [NPDR], proliferative DR [PDR]) categories, differentiating DR stages based on FAZ shape characteristics. For the classification task, which used the deep learning-generated segmented FAZ images as input, the data were split into 70% training, 10% validation, and 20% testing, with 5-fold cross-validation to mitigate overfitting. Data augmentation and the Synthetic Minority Oversampling Technique (SMOTE) were applied to improve classification performance.

Results: The final dataset comprised 253 OCTA scans (126 SCP, 127 DCP) from 161 eyes of 161 participants (one eye per participant): 39 normal participants (24.2%), 78 with NPDR (48.4%), and 44 with PDR (27.3%). The mean age was 49.7 ± 11.8 years, and 54% were male. The FAZ segmentation network achieved a Dice similarity coefficient (DSC) of 97.5% across the dataset, maintaining high precision even on challenging images. The classification model, using the deep learning-generated segmented FAZ images as input, reached an area under the curve (AUC) of 100% for binary classification (normal vs. DR) and 87% for three-class classification (normal, NPDR, PDR) with oversampling.

Conclusion: With its potential for integration into clinical workflows, this system offers a promising assistive tool for clinicians and could enable earlier and more accurate diagnosis of diabetic retinopathy from OCTA images.

Clinical trial number: Not applicable.
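The 70/10/20 train/validation/test split described in the Methods is typically done per class so that the class proportions are preserved in each partition. A minimal sketch of such a stratified split (the function name, seed, and data structures are illustrative; the paper does not specify its implementation):

```python
import random

def stratified_split(samples, labels, fractions=(0.70, 0.10, 0.20), seed=0):
    """Split samples into train/val/test partitions, preserving the
    class proportions given by labels. Fractions follow the 70/10/20
    split described in the Methods."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    train, val, test = [], [], []
    for group in by_class.values():
        rng.shuffle(group)
        n_train = round(fractions[0] * len(group))
        n_val = round(fractions[1] * len(group))
        train += group[:n_train]
        val += group[n_train:n_train + n_val]
        test += group[n_train + n_val:]
    return train, val, test
```

Each of the 5 cross-validation folds would apply an analogous partitioning to a different rotation of the data.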
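SMOTE, mentioned in the Methods for class balancing, creates synthetic minority-class samples by interpolating between a real sample and one of its nearest minority-class neighbors. A minimal sketch under illustrative assumptions (feature vectors as lists of floats, Euclidean distance, hypothetical parameter names; the paper's actual SMOTE implementation and parameters are not specified):

```python
import math
import random

def smote(minority, n_new, k=5, seed=0):
    """Generate n_new synthetic samples: pick a minority sample, find
    one of its k nearest minority neighbors, and interpolate a random
    point on the line segment between them."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbors of x within the minority class (excluding x)
        neighbors = sorted(
            (p for p in minority if p is not x),
            key=lambda p: math.dist(x, p),
        )[:k]
        nb = rng.choice(neighbors)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([xi + gap * (ni - xi) for xi, ni in zip(x, nb)])
    return synthetic
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled set stays within the convex hull of the original minority class.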
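The reported segmentation accuracy uses the Dice similarity coefficient, the standard overlap metric for binary masks: DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch of the computation (flat 0/1 mask representation is illustrative):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks, given as
    flat sequences of 0/1 values: 2|A ∩ B| / (|A| + |B|)."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks are considered a perfect match.
    return 1.0 if total == 0 else 2.0 * intersection / total
```

For example, `dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0])` gives 2·1 / (2 + 1) ≈ 0.667; the study's network averaged 0.975 on this metric across the dataset.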

Citation (APA)

Saeidian, J., Riazi-Esfahani, H., Azimi, H., Farrokhpour, H., Momeni, A., Jamalitootakani, M., … Khalili pour, E. (2025). Automated FAZ segmentation and diabetic retinopathy classification using OCTA images. BMC Ophthalmology, 25(1). https://doi.org/10.1186/s12886-025-04473-2
