Progressive lifelong learning by sharing representations for few labeled data


Abstract

Lifelong Machine Learning (LML) has been receiving increasing attention in the past few years. It produces systems that learn knowledge from consecutive tasks and refine the learned knowledge over a lifetime. In the optimization process of classical fully supervised LML systems, sufficient labeled data are required for extracting inter-task relationships before transfer. In order to leverage abundant unlabeled data and reduce the cost of labeling, a progressive lifelong learning algorithm (PLLA) is proposed in this paper. It uses unsupervised pre-training to learn shared representations that are more suitable as input to LML systems than the raw input data. Experiments show that the proposed PLLA is much more effective than many other LML methods when only few labeled data are available.
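The abstract describes a two-stage pipeline: first learn a shared representation from abundant unlabeled data, then train task-specific models sequentially on a few labeled examples encoded by that representation. The sketch below illustrates only this pipeline shape, not the paper's actual PLLA method; PCA stands in for the unsupervised pre-training stage, the synthetic data, dimensions, and per-task classifiers are all assumptions introduced for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Abundant unlabeled data (hypothetical synthetic stand-in).
unlabeled = rng.normal(size=(5000, 50))

# Stage 1: unsupervised pre-training of a shared representation
# (PCA here is only a simple stand-in for the paper's pre-training).
encoder = PCA(n_components=10).fit(unlabeled)

# Stage 2: tasks arrive sequentially, each with only a few labeled examples.
task_models = []
for task_id in range(3):
    X_few = rng.normal(size=(20, 50))                 # few labeled inputs
    y_few = rng.permutation(np.repeat([0, 1], 10))    # their (toy) labels
    z_few = encoder.transform(X_few)                  # shared representation
    clf = LogisticRegression().fit(z_few, y_few)      # task-specific model
    task_models.append(clf)

# At test time, new inputs for a given task are encoded the same way.
x_new = rng.normal(size=(1, 50))
print(task_models[0].predict(encoder.transform(x_new)))
```

The point of the sketch is that the encoder is trained once on unlabeled data and reused across all tasks, so each new task only has to fit a small model on top of the shared representation.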

Cite

CITATION STYLE

APA

Su, G., Xu, X., Chen, C., Cai, B., & Qing, C. (2018). Progressive lifelong learning by sharing representations for few labeled data. In Communications in Computer and Information Science (Vol. 819, pp. 411–418). Springer Verlag. https://doi.org/10.1007/978-981-10-8530-7_40
