Rethinking Retinal Image Quality: Treating Quality Threshold as a Tunable Hyperparameter

Abstract

Given that the robustness of a deep learning (DL) model to suboptimal images is a key consideration, we asked whether there is any value in including training images of poor quality. In particular, should we treat the (quality) threshold at which a training image is either included or excluded as a tunable hyperparameter? To that end, we systematically examined the effect of including training images of varying quality on the test performance of a DL model in classifying the severity of diabetic retinopathy. We found that there was a unique combination of (categorical) quality labels, or a Goldilocks (continuous) quality score, that gave rise to optimal test performance on either high-quality or suboptimal images. The model trained exclusively on high-quality images yielded worse performance in all test scenarios than the model trained on the optimally tuned training set, which included images with some level of degradation.
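To illustrate the core idea described in the abstract, the sketch below treats the quality cut-off for including a training image as a hyperparameter swept against validation performance. The synthetic data, the logistic-regression stand-in for the DL model, and the quadratic-weighted kappa metric are assumptions made for this example only; they are not the paper's actual dataset, architecture, or evaluation protocol.

```python
# Minimal sketch: tune the image-quality inclusion threshold like any other
# hyperparameter. Synthetic features and a logistic-regression classifier
# stand in for retinal images and the DL model (assumptions, not the paper's pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical data: feature vectors, DR severity grades 0-4,
# and a continuous per-image quality score in [0, 1].
n = 2000
X = rng.normal(size=(n, 32))
y = rng.integers(0, 5, size=n)
quality = rng.uniform(0.0, 1.0, size=n)

# Fixed validation split; in practice this could consist of
# high-quality or deliberately degraded images.
val_mask = rng.uniform(size=n) < 0.2
X_tr, y_tr, q_tr = X[~val_mask], y[~val_mask], quality[~val_mask]
X_val, y_val = X[val_mask], y[val_mask]

best_threshold, best_score = None, -np.inf
for threshold in np.linspace(0.0, 0.9, 10):
    keep = q_tr >= threshold          # include only images at or above the quality cut-off
    if keep.sum() < 100:              # skip thresholds that discard too much data
        continue
    model = LogisticRegression(max_iter=1000).fit(X_tr[keep], y_tr[keep])
    score = cohen_kappa_score(y_val, model.predict(X_val), weights="quadratic")
    if score > best_score:
        best_threshold, best_score = threshold, score

print(f"Selected quality threshold: {best_threshold:.2f} (val kappa = {best_score:.3f})")
```

The same loop applies to categorical quality labels: each candidate "threshold" is then a subset of labels (e.g. good, usable, degraded) admitted into the training set, and the subset maximising validation performance is selected.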

Citation

Yii, F. S., Dutt, R., MacGillivray, T., Dhillon, B., Bernabeu, M., & Strang, N. (2022). Rethinking Retinal Image Quality: Treating Quality Threshold as a Tunable Hyperparameter. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13576 LNCS, pp. 73–83). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-16525-2_8
