Multimedia retrieval based on non-linear graph-based fusion and partial least squares regression


Abstract

Heterogeneous sources of information, such as images, videos, text and metadata, are often used to describe different or complementary views of the same multimedia object, especially in the online news domain and in large annotated image collections. Retrieving multimedia objects, given a multimodal query, requires combining several sources of information in an efficient and scalable way. To this end, we present a novel unsupervised framework for the multimodal fusion of visual and textual similarities, based on visual features, visual concepts and textual metadata, which integrates non-linear graph-based fusion and Partial Least Squares Regression. The fusion strategy is based on the construction of a multimodal contextual similarity matrix and the non-linear combination of relevance scores from query-based similarity vectors. Our framework can employ more than two modalities and high-level information without an increase in memory complexity compared to state-of-the-art baseline methods. The experimental comparison is carried out on three public multimedia collections for the multimedia retrieval task. The results show that the proposed method outperforms the baseline methods in terms of Mean Average Precision and Precision@20.

Citation (APA)

Gialampoukidis, I., Moumtzidou, A., Liparas, D., Tsikrika, T., Vrochidis, S., & Kompatsiaris, I. (2017). Multimedia retrieval based on non-linear graph-based fusion and partial least squares regression. Multimedia Tools and Applications, 76(21), 22383–22403. https://doi.org/10.1007/s11042-017-4797-4
