Semi-supervised Learning by Disentangling and Self-ensembling over Stochastic Latent Space

Abstract

The success of deep learning in medical imaging often comes at the cost of large labeled data sets. Semi-supervised learning (SSL) offers a promising alternative by leveraging the structure of unlabeled data to improve learning from a small set of labeled data. Self-ensembling is a simple SSL approach that encourages consensus among ensemble predictions of unknown labels, improving generalization by making the model less sensitive to perturbations in the latent space. Currently, such ensembles are obtained through randomization such as dropout regularization and random data augmentation. In this work, we hypothesize, from the generalization perspective, that self-ensembling can be improved by exploiting the stochasticity of a disentangled latent space. To this end, we present a stacked SSL model that uses unsupervised disentangled representation learning as the stochastic embedding for self-ensembling. We evaluate the model on multi-label classification of chest X-ray images, demonstrating improved performance over related SSL models as well as the interpretability of its disentangled representations.
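
To make the idea concrete, the PyTorch sketch below illustrates self-ensembling over a stochastic latent space. It is not the authors' implementation: the module names, layer sizes, the 14-label output (typical of chest X-ray benchmarks), and the choice of an MSE consistency loss are all assumptions. The key mechanism is that each forward pass draws a fresh latent sample z from the encoder's posterior q(z|x), and predictions from independent draws on unlabeled data are pushed to agree.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEEncoder(nn.Module):
    """Toy encoder for a diagonal-Gaussian latent q(z|x) = N(mu, sigma^2)."""
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)

    def forward(self, x):
        h = self.backbone(x)
        return self.mu(h), self.logvar(h)

def sample_z(mu, logvar):
    # Reparameterization trick: each call draws a fresh z ~ N(mu, sigma^2),
    # so repeated calls yield the stochastic "ensemble" over the latent space.
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

class Classifier(nn.Module):
    """Linear multi-label head on top of the latent code."""
    def __init__(self, latent_dim=32, n_classes=14):
        super().__init__()
        self.head = nn.Linear(latent_dim, n_classes)

    def forward(self, z):
        return self.head(z)

def ssl_loss(encoder, clf, x_lab, y_lab, x_unlab, cons_weight=1.0):
    # Supervised term on the small labeled set (multi-label -> BCE).
    mu, logvar = encoder(x_lab)
    sup = F.binary_cross_entropy_with_logits(clf(sample_z(mu, logvar)), y_lab)

    # Consistency term: two independent draws from the same posterior
    # q(z|x) should produce matching predictions on unlabeled data.
    mu_u, logvar_u = encoder(x_unlab)
    p1 = torch.sigmoid(clf(sample_z(mu_u, logvar_u)))
    p2 = torch.sigmoid(clf(sample_z(mu_u, logvar_u)))
    cons = F.mse_loss(p1, p2)

    # The VAE's reconstruction/KL terms are omitted here for brevity.
    return sup + cons_weight * cons

# Toy usage with random tensors (14 labels, as in chest X-ray benchmarks).
enc, clf = VAEEncoder(), Classifier()
x_lab, y_lab = torch.randn(8, 784), torch.randint(0, 2, (8, 14)).float()
x_unlab = torch.randn(32, 784)
loss = ssl_loss(enc, clf, x_lab, y_lab, x_unlab)
loss.backward()

In standard self-ensembling the ensemble members differ through dropout or input augmentation; here the perturbation instead comes from resampling the (disentangled) latent code, which is the substitution the paper's hypothesis rests on.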

Citation (APA)

Gyawali, P. K., Li, Z., Ghimire, S., & Wang, L. (2019). Semi-supervised Learning by Disentangling and Self-ensembling over Stochastic Latent Space. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11769 LNCS, pp. 766–774). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-32226-7_85
