Unsupervised Continual Learning via Pseudo Labels

Abstract

Continual learning aims to learn new tasks incrementally using fewer computation and memory resources, instead of retraining the model from scratch whenever a new task arrives. However, existing approaches are designed in a supervised fashion, assuming all data from new tasks has been manually annotated, which is not practical for many real-life applications. In this work, we propose to use pseudo labels instead of ground-truth labels to make continual learning feasible in an unsupervised mode. The pseudo labels of new data are obtained by applying a global clustering algorithm, and we propose to use the model updated in the last incremental step as the feature extractor. Due to the scarcity of existing work, we introduce a new benchmark experimental protocol for unsupervised continual learning of image classification under the class-incremental setting, where no class labels are provided at any incremental learning step. Our method is evaluated on the CIFAR-100 and ImageNet (ILSVRC) datasets by incorporating the pseudo labels into various existing supervised approaches, and it shows promising results in the unsupervised scenario.
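
The abstract outlines the core step: cluster the new task's unlabeled data using features from the previously updated model, then treat the cluster indices as pseudo labels for a supervised class-incremental method. The following is a minimal sketch of that idea, not the paper's implementation; the names (prev_model, new_task_images, n_new_classes, supervised_cl_update) and the choice of k-means are assumptions for illustration.

```python
import torch
from sklearn.cluster import KMeans


def assign_pseudo_labels(prev_model, new_task_images, n_new_classes):
    """Cluster features of unlabeled new-task images into pseudo classes."""
    prev_model.eval()
    with torch.no_grad():
        # Use the model updated in the last incremental step as the feature extractor.
        feats = prev_model(new_task_images)          # (N, feature_dim)
    feats = feats.cpu().numpy()

    # Global clustering over all new-task samples; cluster ids serve as pseudo labels.
    kmeans = KMeans(n_clusters=n_new_classes, n_init=10, random_state=0)
    pseudo_labels = kmeans.fit_predict(feats)        # (N,)
    return torch.from_numpy(pseudo_labels).long()


# Usage sketch: feed the pseudo labels into any supervised continual learning update
# (supervised_cl_update is a placeholder for such a method).
# pseudo_y = assign_pseudo_labels(prev_model, x_new, n_new_classes=10)
# model = supervised_cl_update(model, x_new, pseudo_y)
```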

Citation (APA)

He, J., & Zhu, F. (2022). Unsupervised Continual Learning via Pseudo Labels. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13418 LNAI, pp. 15–32). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-17587-9_2
