Unsupervised concept learning in text subspace for cross-media retrieval

Abstract

Subspace learning (i.e., learning an image, text, or latent subspace) is an essential component of cross-media retrieval. Most existing methods map the different modalities into a latent subspace pre-defined by category labels. However, such labels require extensive manual annotation, and the label-defined subspace may not represent the semantic information precisely. In this paper, we propose a novel unsupervised concept learning approach in the text subspace for cross-media retrieval: images and texts are mapped into a conceptual text subspace via neural networks trained with self-learned concept labels, making the well-established text subspace more reasonable and practical than a pre-defined latent subspace. Experiments on two benchmark datasets demonstrate that the proposed method not only outperforms state-of-the-art unsupervised methods but also achieves better performance than several supervised methods.
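
The abstract describes the pipeline only at a high level; the following is a minimal sketch of the general idea, not the paper's exact method. It clusters text features to obtain self-learned concept labels, trains small networks (stand-ins for the paper's deeper models) to map image and text features into that concept space, and ranks texts for an image query by cosine similarity. The feature dimensions, clustering method, network sizes, and similarity measure are illustrative assumptions.

# Minimal sketch of unsupervised concept learning for cross-media retrieval,
# assuming pre-extracted, paired image/text features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
n_pairs, img_dim, txt_dim, n_concepts = 200, 128, 64, 10

# Random stand-ins for real extracted image and text features.
img_feats = rng.normal(size=(n_pairs, img_dim))
txt_feats = rng.normal(size=(n_pairs, txt_dim))

# 1) Self-learned concept labels: cluster the text features so each
#    image-text pair receives a pseudo concept label with no manual annotation.
concepts = KMeans(n_clusters=n_concepts, n_init=10, random_state=0).fit_predict(txt_feats)
concept_onehot = np.eye(n_concepts)[concepts]

# 2) Map both modalities into the conceptual text subspace: small networks
#    regress each modality's features onto the concept codes.
img_net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(img_feats, concept_onehot)
txt_net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(txt_feats, concept_onehot)

# 3) Cross-media retrieval: embed a query image and all texts into the shared
#    concept space and rank texts by cosine similarity.
img_emb = normalize(img_net.predict(img_feats))
txt_emb = normalize(txt_net.predict(txt_feats))
ranking = np.argsort(-(img_emb[0] @ txt_emb.T))  # text indices, best match first
print("top-5 texts for image 0:", ranking[:5])

In practice the random features above would be replaced by CNN image features and text representations, and the shallow regressors by the neural networks the paper trains on the self-learned concept labels.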

Cite

APA

Fan, M., Wang, W., Dong, P., Wang, R., & Li, G. (2018). Unsupervised concept learning in text subspace for cross-media retrieval. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10735 LNCS, pp. 505–514). Springer Verlag. https://doi.org/10.1007/978-3-319-77380-3_48
