Measuring multi-modality similarities via subspace learning for cross-media retrieval

Abstract

Cross-media retrieval is an interesting research problem that seeks to break through the limitations of modality, so that users can query multimedia objects with examples of a different modality. To enable cross-media retrieval, the problem of measuring similarity between media objects with heterogeneous low-level features must be solved. This paper proposes a novel approach that learns both intra- and inter-media correlations among multi-modality feature spaces and constructs an MLE semantic subspace containing multimedia objects of different modalities. In addition, relevance feedback strategies are developed to improve the effectiveness of cross-media retrieval from both short- and long-term perspectives. Experiments show that the results of our approach are encouraging and the performance is effective. © Springer-Verlag Berlin Heidelberg 2006.
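
The abstract does not detail how the MLE semantic subspace is constructed, but the general idea of embedding heterogeneous media objects into one shared space where distances become comparable can be illustrated with a graph-based sketch. The Python snippet below is a minimal, hypothetical illustration assuming a Laplacian-eigenmaps-style embedding over a joint affinity graph whose diagonal blocks hold intra-media similarities and whose off-diagonal blocks hold inter-media co-occurrence links; the function names, the coupling weight alpha, and the toy data are assumptions for illustration, not the paper's actual algorithm.

import numpy as np
from scipy.linalg import eigh

def joint_affinity(S_img, S_txt, C, alpha=1.0):
    """Block affinity matrix: intra-media similarities on the diagonal
    blocks, inter-media co-occurrence links C on the off-diagonal blocks."""
    n_i, n_t = S_img.shape[0], S_txt.shape[0]
    W = np.zeros((n_i + n_t, n_i + n_t))
    W[:n_i, :n_i] = S_img
    W[n_i:, n_i:] = S_txt
    W[:n_i, n_i:] = alpha * C
    W[n_i:, :n_i] = alpha * C.T
    return W

def laplacian_eigenmaps_subspace(W, dim=2):
    """Embed all media objects into a shared low-dimensional subspace
    by solving the generalized eigenproblem L v = lambda D v."""
    D = np.diag(W.sum(axis=1))   # degree matrix
    L = D - W                    # graph Laplacian
    vals, vecs = eigh(L, D)      # eigenvalues in ascending order
    # Skip the trivial constant eigenvector (smallest eigenvalue).
    return vecs[:, 1:dim + 1]    # one row of coordinates per object

# Toy example: 4 images and 3 texts with two known co-occurrence pairs.
rng = np.random.default_rng(0)
S_img = rng.random((4, 4)); S_img = (S_img + S_img.T) / 2
S_txt = rng.random((3, 3)); S_txt = (S_txt + S_txt.T) / 2
C = np.zeros((4, 3)); C[0, 0] = C[2, 1] = 1.0  # linked image-text pairs

Y = laplacian_eigenmaps_subspace(joint_affinity(S_img, S_txt, C), dim=2)
query_img, candidate_txt = Y[0], Y[4]  # rows 4..6 are the text objects
# Cross-media similarity = closeness in the shared subspace.
print(np.linalg.norm(query_img - candidate_txt))

In this sketch, a smaller distance in the shared subspace indicates a stronger cross-media correlation; the short- and long-term relevance feedback strategies mentioned in the abstract would then refine such a space or its similarity measure from user interactions.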

Citation (APA)

Zhang, H., & Weng, J. (2006). Measuring multi-modality similarities via subspace learning for cross-media retrieval. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4261 LNCS, pp. 979–988). Springer-Verlag. https://doi.org/10.1007/11922162_111
