Self-Supervised, Semi-Supervised, Multi-Context Learning for the Combined Classification and Segmentation of Medical Images

5 Citations · 8 Readers (Mendeley)

Abstract

To tackle the problem of limited annotated data, semi-supervised learning is attracting attention as an alternative to fully supervised models. Moreover, optimizing a multiple-task model to learn "multiple contexts" can provide better generalizability than single-task models. We propose a novel semi-supervised multiple-task model leveraging self-supervision and adversarial training, namely self-supervised, semi-supervised, multi-context learning (S4MCL), and apply it to two crucial medical imaging tasks: classification and segmentation. Our experiments on spine X-rays reveal that the S4MCL model significantly outperforms semi-supervised single-task, semi-supervised multi-context, and fully supervised single-task models, even with a 50% reduction in classification and segmentation labels.
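To make the multi-context idea concrete, a minimal sketch of a shared encoder feeding separate classification and segmentation heads is shown below. This is an illustration of the general multi-task pattern, not the authors' actual S4MCL architecture; all names, dimensions, and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from the paper)
H = W = 8          # tiny "image" size
D = 16             # shared feature dimension
N_CLASSES = 2      # number of classification outputs

# Shared encoder weights plus two task-specific heads
W_enc = rng.normal(size=(H * W, D))
W_cls = rng.normal(size=(D, N_CLASSES))
W_seg = rng.normal(size=(D, H * W))   # per-pixel segmentation logits

def forward(x):
    """Pass one image through the shared encoder and both task heads."""
    feat = np.tanh(x.reshape(-1) @ W_enc)        # shared representation ("context")
    cls_logits = feat @ W_cls                    # classification branch
    seg_logits = (feat @ W_seg).reshape(H, W)    # segmentation branch
    return cls_logits, seg_logits

x = rng.normal(size=(H, W))
cls_logits, seg_logits = forward(x)
print(cls_logits.shape, seg_logits.shape)  # (2,) (8, 8)
```

Because both heads share the encoder, gradients from either task's loss (and from any self-supervised or adversarial objective on unlabeled images) update the same representation, which is the mechanism the abstract credits for improved generalizability.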

Citation (APA)

Imran, A. A. Z., Huang, C., Tang, H., Fan, W., Xiao, Y., Hao, D., … Terzopoulos, D. (2020). Self-Supervised, Semi-Supervised, Multi-Context Learning for the Combined Classification and Segmentation of Medical Images. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 13815–13816). AAAI Press.
