Retinal image understanding emerges from self-supervised multimodal reconstruction

31 citations · 28 Mendeley readers

This article is free to access.

Abstract

The successful application of deep learning methods depends on the availability of sufficient annotated data, which is often a critical bottleneck in medical applications. This has motivated several approaches that complement the training with reconstruction tasks over unlabeled input data, with complementary broad labels, with augmented datasets, or with data from other domains. In this work, we explore the use of reconstruction tasks over multiple medical imaging modalities as a more informative self-supervised approach. Experiments are conducted on the multimodal reconstruction of retinal angiography from retinography. The results demonstrate that the detection of relevant domain-specific patterns emerges from this self-supervised setting.
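To make the setting concrete, the following is a minimal sketch of multimodal reconstruction as a self-supervised task: a network is trained to predict the angiography from the paired retinography, with no manual annotations involved. The toy encoder-decoder, the `loader` of paired image tensors, and the L1 reconstruction loss are illustrative assumptions, not the paper's actual architecture or loss.

```python
# Illustrative sketch only; not the authors' implementation.
# Assumes `loader` yields paired (retinography, angiography) tensors,
# e.g. RGB inputs of shape (N, 3, H, W) and grayscale targets (N, 1, H, W).
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Toy encoder-decoder; the paper's network may differ (e.g. a U-Net)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = EncoderDecoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # assumed reconstruction loss, chosen for simplicity

for retinography, angiography in loader:
    # Self-supervision: the paired angiography itself is the target,
    # so no manual labels are required.
    prediction = model(retinography)
    loss = loss_fn(prediction, angiography)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After such training, the encoder's representations, having learned to locate vessels and other retinal structures needed to synthesize the angiography, can serve as a pretrained starting point for downstream retinal analysis tasks.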

Citation (APA)

Hervella, Á. S., Rouco, J., Novo, J., & Ortega, M. (2018). Retinal image understanding emerges from self-supervised multimodal reconstruction. In Lecture Notes in Computer Science (Vol. 11070, pp. 321–328). Springer. https://doi.org/10.1007/978-3-030-00928-1_37
