Joint multi-view representation learning and image tagging

16 citations · 19 Mendeley readers

Abstract

Automatic image annotation is an important problem in machine learning applications such as image search. Because a semantic gap exists between low-level image features and high-level semantics, the descriptive power of the image representation largely determines annotation quality. In fact, image representation learning and image tagging are two closely related tasks: a proper image representation yields better annotation results, and image tags can serve as guidance for learning a more effective representation. In this paper, we present an optimal predictive subspace learning method that jointly conducts multi-view representation learning and image tagging, so that the two tasks promote each other and annotation performance is further improved. To make the learned subspace more compact and discriminative, both visual structure and semantic information are exploited during learning. Moreover, we introduce powerful predictors (SVMs) for image tagging to achieve better annotation performance. Experiments on standard image annotation datasets demonstrate the advantages of our method over existing image annotation approaches.
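The joint formulation the abstract describes can be illustrated with a toy sketch: two feature views are projected into a shared latent subspace that is simultaneously fit to the tag matrix, and the subspace and predictors are updated in alternation. This is a minimal illustration under assumed notation, not the paper's actual objective — the view projections `W1`, `W2`, latent subspace `Z`, and tag predictor `V` are hypothetical names, the data is synthetic, and a ridge-regression tag predictor stands in for the SVM used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2, k, t = 60, 20, 15, 5, 4  # samples, view dims, subspace dim, num tags

# Synthetic two-view data generated from a shared latent factor (hypothetical).
Z_true = rng.normal(size=(n, k))
X1 = Z_true @ rng.normal(size=(k, d1)) + 0.01 * rng.normal(size=(n, d1))
X2 = Z_true @ rng.normal(size=(k, d2)) + 0.01 * rng.normal(size=(n, d2))
Y = (Z_true @ rng.normal(size=(k, t)) > 0).astype(float)  # binary tag matrix

lam = 1e-3  # small ridge term for numerical stability


def ridge(A, B, lam):
    """Closed-form solution of min_W ||A W - B||^2 + lam ||W||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ B)


def loss(Z, W1, W2, V):
    """Joint objective: view reconstruction terms plus tag-prediction term."""
    return (np.sum((X1 @ W1 - Z) ** 2) + np.sum((X2 @ W2 - Z) ** 2)
            + np.sum((Z @ V - Y) ** 2))


# Alternating minimization over projections, predictor, and latent subspace.
Z = rng.normal(size=(n, k))
W1 = ridge(X1, Z, lam)
W2 = ridge(X2, Z, lam)
V = ridge(Z, Y, lam)
initial_loss = loss(Z, W1, W2, V)
for _ in range(50):
    W1 = ridge(X1, Z, lam)            # view-1 projection into subspace
    W2 = ridge(X2, Z, lam)            # view-2 projection into subspace
    V = ridge(Z, Y, lam)              # linear tag predictor (SVM stand-in)
    # Z update: setting the gradient of the joint objective to zero gives
    #   Z (2 I + V V^T) = X1 W1 + X2 W2 + Y V^T
    Z = np.linalg.solve((2 * np.eye(k) + V @ V.T).T,
                        (X1 @ W1 + X2 @ W2 + Y @ V.T).T).T
final_loss = loss(Z, W1, W2, V)

# Training-set tag accuracy after thresholding the predictor's output.
acc = ((Z @ V > 0.5) == Y).mean()
```

Because tags enter the objective, the learned subspace is pulled toward directions that are predictive of the labels, which is the sense in which representation learning and tagging promote each other in the joint scheme.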

Citation (APA)
Xue, Z., Li, G., & Huang, Q. (2016). Joint multi-view representation learning and image tagging. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016 (pp. 1366–1372). AAAI press. https://doi.org/10.1609/aaai.v30i1.10147
