To tackle the problem of limited annotated data, semi-supervised learning is attracting attention as an alternative to fully supervised models. Moreover, optimizing a multi-task model to learn "multiple contexts" can provide better generalizability than single-task models. We propose a novel semi-supervised multi-task model that leverages self-supervision and adversarial training, namely self-supervised, semi-supervised, multi-context learning (S4MCL), and apply it to two crucial medical imaging tasks, classification and segmentation. Our experiments on spine X-rays reveal that the S4MCL model significantly outperforms semi-supervised single-task, semi-supervised multi-context, and fully supervised single-task models, even with a 50% reduction in classification and segmentation labels.
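The core multi-task idea above, a shared representation feeding separate classification and segmentation heads, can be sketched in a toy form. This is a minimal illustrative example in NumPy, not the authors' S4MCL architecture; all weights, dimensions, and function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy multi-task network: one shared encoder feeds both
# a classification head and a segmentation head, so the two tasks
# learn from a common representation.

def encoder(x, w_enc):
    # Shared features (where "multiple contexts" would be learned).
    return np.maximum(x @ w_enc, 0.0)          # ReLU activation

def classification_head(h, w_cls):
    logits = h @ w_cls
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)    # softmax class probabilities

def segmentation_head(h, w_seg):
    # Per-pixel foreground probability for a flattened mask.
    return 1.0 / (1.0 + np.exp(-(h @ w_seg)))  # sigmoid

x = rng.normal(size=(4, 64))                   # 4 images, 64 input features
w_enc = rng.normal(size=(64, 32))              # shared encoder weights
w_cls = rng.normal(size=(32, 3))               # 3 classes
w_seg = rng.normal(size=(32, 64))              # 64-pixel output mask

h = encoder(x, w_enc)
probs = classification_head(h, w_cls)          # shape (4, 3)
masks = segmentation_head(h, w_seg)            # shape (4, 64)
```

In practice both heads would be trained jointly, with semi-supervised and self-supervised losses applied on the unlabeled portion of the data.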
Imran, A. A. Z., Huang, C., Tang, H., Fan, W., Xiao, Y., Hao, D., … Terzopoulos, D. (2020). Self-Supervised, Semi-Supervised, Multi-Context Learning for the Combined Classification and Segmentation of Medical Images. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 13815–13816). AAAI Press.