Deep learning based multi-modal registration for retinal imaging

Abstract

Precise alignment of retinal images from different modalities allows ophthalmologists not only to track morphological and pathological changes over time but also to combine modalities for the diagnosis, prognostication, management and monitoring of retinal disease. We propose an image registration algorithm that traces changes in retinal structure across modalities using vessel segmentation and automatic landmark detection. Vessel segmentation is performed with a U-Net, and vessel junctions are detected with Mask R-CNN. We evaluated the results of our approach using manual grading by expert readers. On the largest dataset (FA-to-SLO/OCT), containing 1130 image pairs, we achieve an average error rate of 13.12%. We also compared our method with intensity-based affine registration applied to both the original images and the vessel segmentations.
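
The abstract outlines a landmark-based pipeline: vessels are segmented with a U-Net, vessel junctions are detected with Mask R-CNN, and the matched junctions drive the registration. The abstract does not detail how the transform is estimated from those landmarks, so the following is only a minimal sketch, assuming matched junction coordinates from both modalities are already available and that an affine transform is fitted by least squares; the names estimate_affine, fa_junctions and slo_junctions are hypothetical and not from the paper.

    # Minimal sketch: fit an affine transform to matched vessel-junction landmarks.
    # Assumes the segmentation/detection stages have already produced corresponding
    # (x, y) coordinates in both modalities.
    import numpy as np

    def estimate_affine(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
        """Least-squares affine transform mapping src_pts (N, 2) onto dst_pts (N, 2)."""
        n = src_pts.shape[0]
        # Design matrix [x y 1] so that A @ params approximates dst_pts.
        A = np.hstack([src_pts, np.ones((n, 1))])
        # Solve for the 3x2 parameter block column-wise.
        params, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
        # Return a 3x3 homogeneous matrix for composition with a warping routine.
        T = np.eye(3)
        T[:2, :] = params.T
        return T

    # Example with made-up junction coordinates from an FA image and the matching SLO image.
    fa_junctions = np.array([[12.0, 40.0], [80.0, 66.0], [150.0, 120.0], [200.0, 30.0]])
    slo_junctions = np.array([[15.5, 44.0], [84.0, 71.0], [155.0, 126.5], [204.0, 35.0]])
    T = estimate_affine(fa_junctions, slo_junctions)
    print(T)  # 3x3 affine matrix that an image-warping routine can apply

In practice such a fit is usually wrapped in an outlier-robust scheme (e.g. RANSAC) because some detected junctions will be mismatched across modalities; the paper's own evaluation relies on manual grading rather than this sketch.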

Citation (APA)

Arikan, M., Sadeghipour, A., Gerendas, B., Told, R., & Schmidt-Erfurth, U. (2019). Deep learning based multi-modal registration for retinal imaging. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11797 LNCS, pp. 75–82). Springer. https://doi.org/10.1007/978-3-030-33850-3_9
