Re-ranking by multi-modal relevance feedback for content-based social image retrieval

Abstract

With the recent rapid growth of social image hosting websites, it is becoming increasingly easy to construct large databases of tagged images. In this paper, we investigate whether and how social tags can be used to improve content-based image search results, a question that has not been well studied in existing work. We propose a multi-modal relevance feedback scheme and a supervised re-ranking approach that use social tags. The multi-modal scheme utilizes both image and social-tag relevance feedback instances. The approach propagates visual, textual, and multi-modal relevance feedback information over an image-tag relationship graph via a mutual reinforcement process. Experiments show that our approach successfully uses social tags to re-rank content-based social image search results and outperforms other approaches. An additional experiment shows that our multi-modal relevance feedback scheme significantly improves performance compared with the traditional single-modal scheme. © 2012 Springer-Verlag Berlin Heidelberg.
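The mutual reinforcement process sketched in the abstract can be illustrated with a HITS-style propagation on a bipartite image-tag graph: image scores reinforce the scores of their tags and vice versa, while relevance feedback on either modality is injected as seed scores. The graph, the seed vectors, and the blending rule below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

# Hypothetical image-tag graph for illustration (not the paper's data):
# A[i, j] = 1 if image i carries tag j. The paper's graph may instead use
# weighted edges combining visual and textual similarity.
A = np.array([
    [1.0, 1.0, 0.0],
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0],
])

def mutual_reinforcement(A, image_seed, tag_seed, alpha=0.85, iters=50):
    """HITS-style mutual reinforcement on a bipartite image-tag graph.

    Image scores propagate to tags and back, blended at each step with
    relevance-feedback seed scores (alpha controls the mix). Names and
    the blending rule are illustrative assumptions.
    """
    image_seed = image_seed / image_seed.sum()
    tag_seed = tag_seed / tag_seed.sum()
    img, tag = image_seed.copy(), tag_seed.copy()
    for _ in range(iters):
        tag = alpha * (A.T @ img) + (1 - alpha) * tag_seed
        tag /= tag.sum()
        img = alpha * (A @ tag) + (1 - alpha) * image_seed
        img /= img.sum()
    return img, tag

# Multi-modal feedback: the user marks image 0 and tag 2 as relevant,
# so both modalities contribute seed scores to the propagation.
image_seed = np.array([1.0, 0.1, 0.1, 0.1])
tag_seed = np.array([0.1, 0.1, 1.0])
img_scores, tag_scores = mutual_reinforcement(A, image_seed, tag_seed)
print("re-ranked images:", np.argsort(-img_scores))
```

The final image scores induce the re-ranked result list; in this sketch, single-modal feedback would simply set one of the two seed vectors to a uniform distribution.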

APA

Li, J., Ma, Q., Asano, Y., & Yoshikawa, M. (2012). Re-ranking by multi-modal relevance feedback for content-based social image retrieval. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7235 LNCS, pp. 399–410). https://doi.org/10.1007/978-3-642-29253-8_34
