Multimodal retrieval by Text-segment biclustering

Abstract

We describe our approach to the ImageCLEFphoto 2007 task. The novelty of our method lies in biclustering image segments and annotation words. Given the query words, we can select the image segment clusters that have the strongest co-occurrence with the corresponding word clusters; these segment clusters act as the segments relevant to the query. We rank text hits with our own tf.idf-based information retrieval system, and rank image similarities using a 20-dimensional vector describing the visual content of each image segment. Relevant image segments were selected by the biclustering procedure, and images were segmented by graph-based segmentation. We used neither query expansion nor relevance feedback; queries were generated automatically from the title and description words, with the latter weighted by 0.1. © 2008 Springer-Verlag Berlin Heidelberg.
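The cluster-selection step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a precomputed word-cluster × segment-cluster co-occurrence matrix and a word-to-cluster mapping (both hypothetical names), and picks the segment clusters that co-occur most strongly with the clusters of the query words.

```python
import numpy as np

def select_segment_clusters(cooc, word_cluster_of, query_words, top_k=2):
    """Pick the segment clusters with the strongest co-occurrence
    with the clusters of the query words.

    cooc            : (num_word_clusters x num_segment_clusters) count matrix
    word_cluster_of : dict mapping an annotation word to its word-cluster id
    (both are assumed inputs; the paper's biclustering would produce them)
    """
    # Word clusters touched by the query (unknown words are ignored)
    rows = sorted({word_cluster_of[w] for w in query_words
                   if w in word_cluster_of})
    if not rows:
        return []
    # Aggregate co-occurrence mass over the selected word clusters
    scores = cooc[rows].sum(axis=0)
    # Return the top_k segment clusters, strongest first
    return list(np.argsort(scores)[::-1][:top_k])

# Toy example: 2 word clusters, 3 segment clusters
cooc = np.array([[5, 0, 1],
                 [0, 4, 2]])
word_cluster_of = {"tiger": 0, "grass": 1}
print(select_segment_clusters(cooc, word_cluster_of, ["tiger"]))  # → [0, 2]
```

Ranking within the selected clusters would then use the 20-dimensional visual descriptors mentioned in the abstract; that part is omitted here.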

Citation (APA)

Benczúr, A., Bíró, I., Brendel, M., Csalogány, K., Daróczy, B., & Siklósi, D. (2008). Multimodal retrieval by Text-segment biclustering. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5152 LNCS, pp. 518–521). Springer Verlag. https://doi.org/10.1007/978-3-540-85760-0_64
