A cross-domain lifelong learning model for visual understanding

Abstract

In machine perception of images and video, machines are expected to learn continually over their lifetime, much as humans do. Starting from this anthropomorphic view of media perception, this paper studies multimedia perception based on lifelong machine learning. An ideal lifelong learning system for visual understanding should learn related tasks from one or more domains continuously; however, most existing lifelong learning algorithms do not account for the domain shift among tasks. In this work, we propose a novel cross-domain lifelong learning model (CD-LLM) to address the domain shift problem in visual understanding. The main idea is to construct a low-dimensional common subspace that captures domain-invariant properties by embedding the task subspaces on a Grassmann manifold. Tasks are projected into this common subspace, and model learning is then performed on the projected representations. Extensive experiments on competitive cross-domain datasets demonstrate the effectiveness and efficiency of the proposed algorithm on cross-domain visual tasks.
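The abstract only sketches the construction, so the following is a minimal illustrative sketch rather than the paper's CD-LLM algorithm: it approximates a domain-invariant common subspace by taking a chordal (projector-averaged) mean of per-domain PCA subspaces on the Grassmann manifold, projects both domains into it, and then trains an ordinary classifier. The helper names (`pca_basis`, `common_subspace`), the toy data, the subspace dimension, and the choice of logistic regression are all assumptions introduced for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def pca_basis(X, d):
    """Orthonormal basis (features x d) of the top-d principal directions of X."""
    return PCA(n_components=d).fit(X).components_.T

def common_subspace(bases, d):
    """Chordal mean of subspaces on the Grassmann manifold:
    top-d eigenvectors of the averaged projection matrices U U^T."""
    P = sum(U @ U.T for U in bases) / len(bases)
    _, eigvecs = np.linalg.eigh(P)          # eigenvalues in ascending order
    return eigvecs[:, -d:]                  # columns span the shared subspace

# Toy example: two "domains" whose feature statistics are shifted.
rng = np.random.default_rng(0)
Xs = rng.normal(size=(200, 50))             # source-domain features
ys = (Xs[:, 0] > 0).astype(int)             # source labels
Xt = rng.normal(size=(200, 50)) + 0.5       # target-domain features (shifted)
yt = (Xt[:, 0] > 0.5).astype(int)

d = 10
W = common_subspace([pca_basis(Xs, d), pca_basis(Xt, d)], d)

# Project both domains into the shared low-dimensional subspace, then learn.
clf = LogisticRegression(max_iter=1000).fit(Xs @ W, ys)
print("target-domain accuracy:", clf.score(Xt @ W, yt))
```

The projector-mean step is a stand-in for whatever Grassmann-manifold embedding the paper actually uses; the point is only to show the overall pipeline of per-task subspaces, a shared subspace, projection, and model learning.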

Cite

APA

Qing, C., Huang, Z., & Xu, X. (2016). A cross-domain lifelong learning model for visual understanding. In Lecture Notes in Computer Science (Vol. 9916 LNCS, pp. 438–448). Springer. https://doi.org/10.1007/978-3-319-48890-5_43
