Multimodal-based supervised learning for image search reranking

Abstract

The aim of image search reranking is to rerank the images returned by a conventional text-based image search engine so as to improve search precision, diversity, and related criteria. Current image reranking methods are often based on a single modality. However, it is hard to find a single modality that works well for all kinds of queries. This paper proposes a multimodal-based supervised learning method for image search reranking. First, for each modality, a separate similarity graph is constructed and a modality-specific approach is used to compute the similarity between images on that graph. Exploiting the similarity graphs and the initial ranked list, we integrate the multiple modalities into query-independent reranking features, namely PageRank Pseudo Relevance Feedback, the Density Feature, and the Initial Ranking Score Feature, and then fuse them into a 19-dimensional feature vector for each image. A supervised method is then employed to learn the weight of each reranking feature. Experiments conducted on the MSRA-MM dataset demonstrate the improvement in robustness and effectiveness of the proposed method.
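To make the pipeline concrete, the sketch below illustrates the overall flow described in the abstract: per-modality similarity graphs, graph-based reranking features, feature fusion, and supervised weight learning. It is a minimal illustration, not the paper's implementation: the Gaussian-kernel graphs, the personalized-PageRank stand-in for PageRank Pseudo Relevance Feedback, the simple density estimate, the least-squares weight learner, and the toy data are all assumptions, and the fused vector here is smaller than the paper's 19 dimensions.

```python
import numpy as np

def similarity_graph(features, sigma=1.0):
    """Gaussian-kernel similarity graph for one modality (illustrative choice)."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def pagerank_prf(sim, initial_rank, top_k=5, alpha=0.85, iters=50):
    """Personalized PageRank seeded on the top-k images of the initial list
    (a stand-in for the PageRank Pseudo Relevance Feedback feature)."""
    n = sim.shape[0]
    P = sim / sim.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    seed = np.zeros(n)
    seed[np.argsort(initial_rank)[:top_k]] = 1.0 / top_k
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = alpha * P.T @ r + (1 - alpha) * seed
    return r

def density_feature(sim):
    """Average similarity to all other images (a simple density estimate)."""
    return sim.mean(axis=1)

def build_feature_vectors(modalities, initial_rank, initial_score):
    """Concatenate per-modality reranking features plus the initial ranking score."""
    cols = []
    for feats in modalities:                      # one feature matrix per modality
        sim = similarity_graph(feats)
        cols.append(pagerank_prf(sim, initial_rank))
        cols.append(density_feature(sim))
    cols.append(initial_score)
    return np.column_stack(cols)

def learn_weights(X, relevance):
    """Supervised weight learning; least squares stands in for the paper's learner."""
    w, *_ = np.linalg.lstsq(X, relevance, rcond=None)
    return w

# Toy usage: 20 images, two modalities, random data for illustration only.
rng = np.random.default_rng(0)
mods = [rng.normal(size=(20, 8)), rng.normal(size=(20, 16))]
init_rank = np.arange(20)                         # initial text-based ranking positions
init_score = 1.0 / (init_rank + 1)
X = build_feature_vectors(mods, init_rank, init_score)
w = learn_weights(X, rng.uniform(size=20))        # labels would be human relevance judgments
reranked = np.argsort(-(X @ w))                   # rerank by the learned linear score
print(reranked)
```

In practice the relevance labels would come from human judgments on the MSRA-MM dataset, and the learned weights would be applied to the fused feature vectors of unseen queries to produce the reranked list.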

Citation (APA)

Zhao, S., Ma, J., & Cui, C. (2015). Multimodal-based supervised learning for image search reranking. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9098, pp. 135–147). Springer Verlag. https://doi.org/10.1007/978-3-319-21042-1_11
