Semi-supervised Multi-task Learning with Chest X-Ray Images

Abstract

Discriminative models that require full supervision are ineffective in the medical imaging domain when large labeled datasets are unavailable. By contrast, generative modeling, which learns data generation alongside classification, facilitates semi-supervised training with limited labeled data. Moreover, generative modeling can be advantageous for accomplishing multiple objectives with better generalization. We propose a novel multi-task learning model that jointly learns a classifier and a segmentor from chest X-ray images through semi-supervised learning. In addition, we propose a new loss function that combines absolute KL divergence with Tversky loss (KLTV) to yield faster convergence and better segmentation performance. Based on our experimental results with a novel segmentation model, an Adversarial Pyramid Progressive Attention U-Net (APPAU-Net), we hypothesize that KLTV can be more effective for generalizing multi-tasking models while remaining competitive in segmentation-only tasks.
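The abstract names the KLTV loss as a combination of absolute KL divergence and Tversky loss but does not give the exact formulation. The sketch below shows one plausible reading, assuming a simple weighted sum of the two terms, a Tversky index with alpha/beta penalties on false positives and false negatives, and probabilities clipped for numerical stability in the KL term. The function names and the `weight` parameter are illustrative, not taken from the paper.

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky loss = 1 - TP / (TP + alpha*FP + beta*FN).

    `pred` holds predicted foreground probabilities; `target` is the
    binary ground-truth mask. alpha/beta weight FP vs. FN errors.
    """
    tp = np.sum(pred * target)            # soft true positives
    fp = np.sum(pred * (1.0 - target))    # soft false positives
    fn = np.sum((1.0 - pred) * target)    # soft false negatives
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def abs_kl_divergence(pred, target, eps=1e-7):
    """Absolute value of KL(target || pred), with clipping to avoid log(0)."""
    p = np.clip(target, eps, 1.0)
    q = np.clip(pred, eps, 1.0)
    return float(np.abs(np.sum(p * np.log(p / q))))

def kltv_loss(pred, target, weight=1.0):
    """Hypothetical KLTV combination: |KL| + weight * Tversky loss."""
    return abs_kl_divergence(pred, target) + weight * tversky_loss(pred, target)

# A perfect prediction should drive both terms toward zero, while a
# uniform 0.5 prediction should incur a clearly larger loss.
mask = np.array([0.0, 1.0, 1.0, 0.0])
print(kltv_loss(mask, mask), kltv_loss(np.full(4, 0.5), mask))
```

In practice one would compute this per-batch on network outputs (e.g., sigmoid probabilities), and the relative weighting of the KL and Tversky terms would be a tunable hyperparameter.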

Citation (APA)

Imran, A. A. Z., & Terzopoulos, D. (2019). Semi-supervised Multi-task Learning with Chest X-Ray Images. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11861 LNCS, pp. 151–159). Springer. https://doi.org/10.1007/978-3-030-32692-0_18
