Y-Net: A Spatiospectral Dual-Encoder Network for Medical Image Segmentation

Abstract

Automated segmentation of retinal optical coherence tomography (OCT) images has become an important recent direction in machine learning for medical applications. We hypothesize that the layered anatomic structure of the retina and its high-frequency variation in OCT images make retinal OCT well suited to extracting spectral-domain features and combining them with spatial-domain features. In this work, we present Y-Net, an architecture that combines frequency-domain features with image-domain features to improve the segmentation performance on OCT images. The results demonstrate that introducing two branches, one for spectral- and one for spatial-domain features, yields a significant improvement in fluid segmentation performance and allows Y-Net to outperform the well-known U-Net model: a 13% gain on the fluid segmentation Dice score and 1.9% on the average Dice score. Finally, an ablation that removes selected frequency ranges in the spectral domain demonstrates the contribution of these features to the fluid segmentation gains. Code: github.com/azadef/ynet
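
The following is a minimal PyTorch sketch of the dual-encoder idea described in the abstract: one encoder processes the OCT image in the spatial domain, a second encoder additionally mixes its features in the frequency domain via an FFT, and the decoder fuses the two streams before predicting the segmentation map. The module names (YNetSketch, SpectralBlock), the two-level layout, and the channel widths are illustrative assumptions, not the authors' implementation; the official code is at github.com/azadef/ynet. The frequency-range ablation mentioned in the abstract could be emulated by zeroing a band of the freq tensor before the 1x1 convolutions.

# Minimal sketch of a spatiospectral dual-encoder segmentation network.
# Illustrative only; not the authors' Y-Net implementation.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3 conv + BN + ReLU layers, as in a standard U-Net encoder stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class SpectralBlock(nn.Module):
    # Mixes features in the Fourier domain: real FFT -> 1x1 convs on the
    # stacked real/imaginary parts -> inverse FFT back to the image domain.
    def __init__(self, channels):
        super().__init__()
        self.freq_conv = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, 1),
            nn.BatchNorm2d(2 * channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        _, _, h, w = x.shape
        freq = torch.fft.rfft2(x, norm="ortho")         # (B, C, H, W//2+1), complex
        z = torch.cat([freq.real, freq.imag], dim=1)    # (B, 2C, H, W//2+1)
        z = self.freq_conv(z)
        real, imag = torch.chunk(z, 2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")


class YNetSketch(nn.Module):
    def __init__(self, in_ch=1, num_classes=2, base=32):
        super().__init__()
        # Spatial encoder (two levels for brevity).
        self.s1 = conv_block(in_ch, base)
        self.s2 = conv_block(base, 2 * base)
        # Spectral encoder: same layout, with Fourier-domain mixing at each level.
        self.f1 = conv_block(in_ch, base)
        self.f1_spec = SpectralBlock(base)
        self.f2 = conv_block(base, 2 * base)
        self.f2_spec = SpectralBlock(2 * base)
        self.pool = nn.MaxPool2d(2)
        # Decoder fuses concatenated spatial + spectral features.
        self.up = nn.ConvTranspose2d(4 * base, 2 * base, 2, stride=2)
        self.dec = conv_block(4 * base, base)
        self.head = nn.Conv2d(base, num_classes, 1)

    def forward(self, x):
        # Spatial branch.
        s1 = self.s1(x)
        s2 = self.s2(self.pool(s1))
        # Spectral branch.
        f1 = self.f1_spec(self.f1(x))
        f2 = self.f2_spec(self.f2(self.pool(f1)))
        # Fuse both bottlenecks, upsample, and merge with both skip connections.
        up = self.up(torch.cat([s2, f2], dim=1))
        dec = self.dec(torch.cat([up, s1, f1], dim=1))
        return self.head(dec)


if __name__ == "__main__":
    model = YNetSketch(in_ch=1, num_classes=4)
    logits = model(torch.randn(2, 1, 128, 128))  # e.g. a batch of OCT B-scan crops
    print(logits.shape)                          # torch.Size([2, 4, 128, 128])

In this sketch the spectral branch mirrors the spatial one, so fusion is a simple channel-wise concatenation at each resolution; the actual paper's design choices (number of levels, fusion scheme, frequency blocks) should be taken from the official repository.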

Citation (APA)

Farshad, A., Yeganeh, Y., Gehlbach, P., & Navab, N. (2022). Y-Net: A Spatiospectral Dual-Encoder Network for Medical Image Segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13432 LNCS, pp. 582–592). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-16434-7_56
