The successful application of deep learning-based methods depends on the availability of sufficient annotated data, which is often scarce in medical applications. This has motivated several approaches that complement training with reconstruction tasks over unlabeled input data, complementary broad labels, augmented datasets, or data from other domains. In this work, we explore the use of reconstruction tasks over multiple medical imaging modalities as a more informative self-supervised approach. Experiments are conducted on the multimodal reconstruction of retinal angiography from retinography. The results demonstrate that the detection of relevant domain-specific patterns emerges from this self-supervised setting.
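The core idea can be sketched as learning a mapping from one modality (retinography) to a paired modality (angiography) by minimizing a reconstruction loss, so that no manual annotations are needed. The snippet below is a minimal illustrative sketch under strong simplifying assumptions: a linear model stands in for the deep network used in the paper, the paired patches are synthetic, and plain MSE is used as the reconstruction loss (the abstract does not specify the loss details).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired data: flattened "retinography" patches X and the
# corresponding "angiography" patches Y (synthetic, for illustration only).
n, d = 64, 16
X = rng.normal(size=(n, d))       # source-modality patches
W_true = rng.normal(size=(d, d))
Y = X @ W_true                    # target-modality patches (synthetic pairing)

W = np.zeros((d, d))              # parameters of the stand-in linear model

def reconstruction_loss(W):
    """MSE between predicted and real target-modality patches."""
    residual = X @ W - Y
    return (residual ** 2).mean()

loss_before = reconstruction_loss(W)
for _ in range(200):              # plain gradient descent on the MSE objective
    grad = 2 * X.T @ (X @ W - Y) / (n * d)
    W -= 0.05 * grad
loss_after = reconstruction_loss(W)

print(loss_after < loss_before)   # the self-supervised objective is reduced
```

The point of the sketch is only the training signal: supervision comes entirely from the paired modality, which is why representations useful for retinal image understanding can emerge without labels.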
CITATION STYLE
Hervella, Á. S., Rouco, J., Novo, J., & Ortega, M. (2018). Retinal image understanding emerges from self-supervised multimodal reconstruction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11070 LNCS, pp. 321–328). Springer Verlag. https://doi.org/10.1007/978-3-030-00928-1_37