Four Models for Automatic Recognition of Left and Right Eye in Fundus Images

Abstract

Fundus image analysis is crucial for eye-condition screening and diagnosis and, consequently, for long-term personalized health management. This paper targets left-eye/right-eye recognition, a basic module for fundus image analysis. We study how to automatically assign left-eye/right-eye labels to fundus images of the posterior pole. For this under-explored task, four models are developed. Two of them are based on optic disc localization, using an extremely simple max-intensity rule and the more advanced Faster R-CNN, respectively. The other two models require no localization but perform holistic image classification, using classical Local Binary Patterns (LBP) features and a fine-tuned ResNet-18, respectively. The four models are tested on a real-world set of 1,633 fundus images from 834 subjects. The fine-tuned ResNet-18 achieves the highest accuracy, 0.9847. Interestingly, the LBP-based model, with the trick of left-right contrastive classification, performs close to the deep model, with an accuracy of 0.9718.
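
The abstract describes the max-intensity model only at a high level, so the sketch below is one plausible reading rather than the authors' exact procedure: it treats the brightest smoothed region as the optic disc and decides the eye side from that region's horizontal position. The function name, the smoothing patch size, and the assumed display convention (optic disc in the right half of a macula-centred right-eye image) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from PIL import Image
from scipy.ndimage import uniform_filter

def classify_eye_by_max_intensity(image_path, patch=32):
    """Guess the eye side from the horizontal position of the brightest
    smoothed region, used as a crude proxy for the optic disc."""
    gray = np.asarray(Image.open(image_path).convert("L"), dtype=np.float32)

    # Box-filter smoothing so an isolated bright pixel (e.g. a flash
    # reflection) does not win; the optic disc is a large bright blob.
    smoothed = uniform_filter(gray, size=patch)

    # Column index of the maximum smoothed intensity ~ optic disc centre.
    _, x = np.unravel_index(np.argmax(smoothed), smoothed.shape)

    # Assumed display convention for macula-centred images: the disc sits in
    # the right half for a right eye (OD) and the left half for a left eye (OS);
    # flip the mapping if the acquisition convention differs.
    return "right" if x > gray.shape[1] / 2 else "left"
```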

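The fine-tuned ResNet-18 model amounts to standard two-class transfer learning. A minimal PyTorch sketch is given below; the label encoding, optimizer, learning rate, and data loader are assumptions for illustration, not values reported in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Two classes assumed here: 0 = left eye, 1 = right eye.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # replace the 1000-way ImageNet head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_one_epoch(model, loader, device="cuda"):
    """One pass over a DataLoader yielding (image batch, eye-side label) pairs."""
    model.to(device).train()
    for images, labels in loader:               # images: (B, 3, 224, 224)
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```
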
Cite

APA

Lai, X., Li, X., Qian, R., Ding, D., Wu, J., & Xu, J. (2019). Four Models for Automatic Recognition of Left and Right Eye in Fundus Images. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11295 LNCS, pp. 507–517). Springer Verlag. https://doi.org/10.1007/978-3-030-05710-7_42
