Knowledge-guided pretext learning for utero-placental interface detection


Abstract

Modern machine learning systems, such as convolutional neural networks, rely on a rich collection of training data to learn discriminative representations. In many medical imaging applications, unfortunately, collecting a large set of well-annotated data is prohibitively expensive. To overcome this data shortage and facilitate representation learning, we develop Knowledge-guided Pretext Learning (KPL), which learns anatomy-related image representations in a pretext task under the guidance of knowledge from the downstream target task. In the context of utero-placental interface detection in placental ultrasound, we find that KPL substantially improves the quality of the learned representations without consuming data from external sources such as ImageNet. It outperforms the widely adopted supervised pre-training and self-supervised learning approaches across model capacities and dataset scales. Our results suggest that pretext learning is a promising direction for representation learning in medical image analysis, especially in the small-data regime.
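The abstract describes a two-stage workflow: pre-train an encoder on a knowledge-guided pretext task, then fine-tune the same encoder on the downstream detection task. The paper's actual pretext task and architecture are not specified in the abstract, so the sketch below is only a minimal, generic illustration of that pretext-then-fine-tune pattern in PyTorch; every name here (Encoder, pretext_head, the ten pretext classes, the random tensors standing in for placental ultrasound frames) is a hypothetical placeholder, not the authors' method.

# Minimal sketch of a generic pretext-pretraining -> fine-tuning workflow.
# All names and shapes are illustrative assumptions; the paper's
# knowledge-guided pretext task is not detailed in the abstract.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Small convolutional backbone shared by both training stages."""
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, out_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

encoder = Encoder()
images = torch.randn(8, 1, 64, 64)  # stand-in for ultrasound frames

# Stage 1: pretext pre-training. In KPL the pretext labels would be derived
# from downstream-task knowledge; here we simply assume they exist.
pretext_head = nn.Linear(128, 10)   # 10 pretext classes: an assumption
opt = torch.optim.Adam(list(encoder.parameters()) + list(pretext_head.parameters()))
pretext_targets = torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(pretext_head(encoder(images)), pretext_targets)
opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: fine-tune on the downstream task (utero-placental interface
# detection, treated here as binary classification for simplicity).
detector_head = nn.Linear(128, 2)
opt = torch.optim.Adam(list(encoder.parameters()) + list(detector_head.parameters()))
targets = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(detector_head(encoder(images)), targets)
opt.zero_grad(); loss.backward(); opt.step()

The point the abstract argues is that the encoder weights carried over from stage 1 already encode anatomy-related structure, so the fine-tuning stage needs less annotated data than training from scratch or from external-source (e.g. ImageNet) weights.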

Citation

Qi, H., Collins, S., & Noble, J. A. (2020). Knowledge-guided pretext learning for utero-placental interface detection. In Lecture Notes in Computer Science (Vol. 12261, pp. 582–593). Springer. https://doi.org/10.1007/978-3-030-59710-8_57
