Adding layers to the U-Net architecture leads to more parameters and greater network complexity. The Visual Geometry Group (VGG) architecture with a 16-layer backbone can mitigate this problem by using small convolutions. Densely Connected Networks (DenseNet) can be used to avoid redundant feature learning in VGG by connecting each layer directly to the feature maps of the preceding layers. Adding a dropout layer protects DenseNet against overfitting. This study proposes VG-DropDNet, an architecture that combines VGG, DenseNet, and U-Net with a dropout layer for retinal blood vessel segmentation. VG-DropDNet is evaluated on the Digital Retinal Images for Vessel Extraction (DRIVE) and Structured Analysis of the Retina (STARE) datasets. On DRIVE, the method achieves an accuracy of 95.36%, a sensitivity of 79.74%, and a specificity of 97.61%. The F1-score of 0.8144 on DRIVE indicates that VG-DropDNet has strong precision and recall, and the IoU of 68.70% shows that its segmentation closely resembles the ground truth. On STARE, the results are excellent: an accuracy of 98.56%, a sensitivity of 91.24%, a specificity of 92.99%, and an IoU of 86.90%, showing that the proposed method is excellent and robust for retinal blood vessel segmentation. The Cohen's Kappa coefficients obtained by VG-DropDNet, 0.8386 on DRIVE and 0.98 on STARE, indicate that its results are consistent and precise on both datasets. Overall, the results on these datasets indicate that VG-DropDNet is effective, robust, and stable for retinal blood vessel segmentation.
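The abstract does not give implementation details, but the core building block it describes, DenseNet-style connectivity with small VGG-style convolutions and dropout inside a U-Net-like encoder/decoder, can be sketched as follows. This is a minimal, hypothetical PyTorch illustration; the layer counts, growth rate, and dropout rate are assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch of a densely connected block with dropout, the kind of
# unit VG-DropDNet is described as placing inside a U-Net-style pipeline.
# Hyperparameters (growth_rate, num_layers, drop_rate) are illustrative only.
import torch
import torch.nn as nn

class DenseDropBlock(nn.Module):
    """Each layer receives the concatenation of all previous feature maps
    (DenseNet-style), uses small 3x3 convolutions (VGG-style), and applies
    dropout to curb overfitting."""
    def __init__(self, in_channels, growth_rate=16, num_layers=3, drop_rate=0.2):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
                nn.BatchNorm2d(growth_rate),
                nn.ReLU(inplace=True),
                nn.Dropout2d(drop_rate),
            ))
            channels += growth_rate  # dense connectivity grows the channel count
        self.out_channels = channels

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

# Example: one encoder step on a grayscale retinal patch.
block = DenseDropBlock(in_channels=1)
patch = torch.randn(1, 1, 48, 48)       # toy fundus patch
encoded = block(patch)                  # (1, block.out_channels, 48, 48)
pooled = nn.MaxPool2d(2)(encoded)       # downsample before the next block
```

The evaluation metrics quoted in the abstract (accuracy, sensitivity, specificity, F1, IoU, and Cohen's Kappa) are standard pixel-wise measures for binary segmentation. The sketch below shows how they are typically computed from a predicted vessel mask and its ground truth; the toy masks are made up for illustration and are not from DRIVE or STARE.

```python
# Standard segmentation metrics from a binary confusion matrix (NumPy only).
import numpy as np

def segmentation_metrics(pred, truth):
    """pred, truth: boolean arrays of the same shape (vessel pixel = True)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    n = tp + tn + fp + fn

    accuracy    = (tp + tn) / n
    sensitivity = tp / (tp + fn)          # recall on vessel pixels
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f1          = 2 * precision * sensitivity / (precision + sensitivity)
    iou         = tp / (tp + fp + fn)     # intersection over union

    # Cohen's Kappa: observed agreement corrected for chance agreement.
    p_o = accuracy
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (p_o - p_e) / (1 - p_e)
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, f1=f1, iou=iou, kappa=kappa)

# Toy 4x4 masks, purely for demonstration.
truth = np.array([[1,1,0,0],[1,1,0,0],[0,0,0,0],[0,0,1,1]], dtype=bool)
pred  = np.array([[1,1,0,0],[1,0,0,0],[0,0,0,0],[0,1,1,1]], dtype=bool)
print(segmentation_metrics(pred, truth))
```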
Desiani, A., Erwin, Suprihatin, B., Efriliyanti, F., Arhami, M., & Setyaningsih, E. (2022). VG-DropDNet a Robust Architecture for Blood Vessels Segmentation on Retinal Image. IEEE Access, 10, 92067–92083. https://doi.org/10.1109/ACCESS.2022.3202890