How to Learn from Unlabeled Volume Data: Self-supervised 3D Context Feature Learning


Abstract

The vast majority of 3D medical images lack detailed image-based expert annotations. Ongoing advances in deep convolutional neural networks clearly demonstrate the benefit of supervised learning for extracting relevant anatomical information and aiding image-based analysis and interventions, but supervised learning relies heavily on labeled data. Self-supervised learning, which requires no expert labels, provides an appealing way to discover data-inherent patterns and leverage anatomical information freely available in the medical images themselves. In this work, we propose a new approach to train effective convolutional feature extractors, based on image-intrinsic spatial offset relations and an auxiliary heatmap regression loss. The learned features capture semantic anatomical information and enable state-of-the-art accuracy on a k-NN-based one-shot segmentation task without any subsequent fine-tuning.
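The abstract only sketches the pretext task, so the following is a minimal, hypothetical illustration of the general idea: sample two patches from the same volume at a known spatial offset, encode each with a shared 3D CNN, and regress a heatmap whose peak encodes that offset. All names, architectures, and sizes below (PatchEncoder, OffsetHead, gaussian_heatmap, the 19x19 heatmap) are assumptions for illustration in PyTorch, not the authors' implementation.

```python
# Hypothetical sketch of an offset-regression pretext task (PyTorch).
# Names and sizes are illustrative, not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    """Tiny 3D CNN mapping a patch to a feature vector."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
    def forward(self, x):
        return self.net(x)

class OffsetHead(nn.Module):
    """Regresses a 2D heatmap whose peak encodes the relative offset."""
    def __init__(self, feat_dim=64, heatmap_size=19):
        super().__init__()
        self.hm = heatmap_size
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, heatmap_size * heatmap_size),
        )
    def forward(self, f1, f2):
        out = self.mlp(torch.cat([f1, f2], dim=1))
        return out.view(-1, self.hm, self.hm)

def gaussian_heatmap(offsets, size=19, sigma=1.0):
    """Target heatmap: a Gaussian centered at the (normalized) offset."""
    ys = torch.arange(size, dtype=torch.float32)
    grid_y, grid_x = torch.meshgrid(ys, ys, indexing="ij")
    # offsets in [-1, 1] mapped to heatmap coordinates
    cy = (offsets[:, 0] + 1) / 2 * (size - 1)
    cx = (offsets[:, 1] + 1) / 2 * (size - 1)
    d2 = (grid_y - cy.view(-1, 1, 1)) ** 2 + (grid_x - cx.view(-1, 1, 1)) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

# One training step on a batch of patch pairs with known offsets.
encoder, head = PatchEncoder(), OffsetHead()
patch_a = torch.randn(8, 1, 32, 32, 32)   # stand-in for sampled patches
patch_b = torch.randn(8, 1, 32, 32, 32)
offsets = torch.rand(8, 2) * 2 - 1        # known offsets: labels for free
pred = head(encoder(patch_a), encoder(patch_b))
loss = F.mse_loss(pred, gaussian_heatmap(offsets))
loss.backward()
```

Because the offset between two patches of the same volume is known by construction, the regression targets come for free, with no expert labels. Once trained, the encoder is frozen and its features can drive the k-NN one-shot segmentation the abstract mentions. A schematic version, again with made-up array shapes and using scikit-learn rather than any code from the paper, might look like:

```python
# Hypothetical k-NN one-shot labeling: transfer labels from a single
# annotated volume to a new one via nearest neighbors in feature space.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

atlas_feats = np.random.randn(5000, 64)       # frozen-encoder features at atlas voxels
atlas_labels = np.random.randint(0, 4, 5000)  # the one available annotation
target_feats = np.random.randn(3000, 64)      # features at target voxels

knn = KNeighborsClassifier(n_neighbors=5).fit(atlas_feats, atlas_labels)
target_labels = knn.predict(target_feats)     # voxel-wise one-shot segmentation
```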

Citation (APA)

Blendowski, M., Nickisch, H., & Heinrich, M. P. (2019). How to Learn from Unlabeled Volume Data: Self-supervised 3D Context Feature Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11769 LNCS, pp. 649–657). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-32226-7_72
