Learning user queries in multimodal dissimilarity spaces

Abstract

Different strategies to learn user semantic queries from dissimilarity representations of audio-visual content are presented. When dealing with large corpora of video documents, a feature representation requires the on-line computation of distances between all documents and a query. A dissimilarity representation may therefore be preferred, because its off-line computation speeds up the retrieval process. We show how distances related to visual and audio video features can be used directly to learn complex concepts from a set of positive and negative examples provided by the user. Based on the idea of dissimilarity spaces, we derive three algorithms to fuse modalities and thereby enhance the precision of retrieval results. The evaluation of our technique is performed on artificial data and on the annotated TRECVID corpus. © Springer-Verlag Berlin Heidelberg 2006.
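The core idea of a dissimilarity space is to represent each document not by its raw features but by its vector of distances to a fixed set of prototype documents, so the distance matrix can be computed once off-line. The following is a minimal sketch of that idea, not the paper's actual algorithms: it assumes Euclidean distances, prototypes drawn from the corpus, and a simple nearest-centroid learner over the user's positive and negative examples (the fusion algorithms of the paper are not reproduced here).

```python
import numpy as np

def dissimilarity_space(features, prototypes):
    """Map each document to its vector of Euclidean distances to prototypes.

    Returns an array of shape (n_docs, n_prototypes); in a retrieval
    system this matrix would be computed off-line, once per corpus.
    """
    return np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=2)

def learn_query(dspace, pos_idx, neg_idx):
    """Toy concept learner: score documents by how much closer they lie
    to the centroid of positive examples than to the negative centroid,
    all within the dissimilarity space (hypothetical stand-in for the
    paper's learning algorithms)."""
    pos_centroid = dspace[pos_idx].mean(axis=0)
    neg_centroid = dspace[neg_idx].mean(axis=0)
    return (np.linalg.norm(dspace - neg_centroid, axis=1)
            - np.linalg.norm(dspace - pos_centroid, axis=1))

# Toy corpus: two well-separated clusters of "documents" in feature space.
rng = np.random.default_rng(0)
docs = np.vstack([rng.normal(0.0, 0.1, (5, 4)),
                  rng.normal(1.0, 0.1, (5, 4))])
protos = docs[[0, 5]]                    # prototypes picked from the corpus
D = dissimilarity_space(docs, protos)    # the off-line step
scores = learn_query(D, pos_idx=[0, 1], neg_idx=[5, 6])
ranking = np.argsort(-scores)            # most query-relevant documents first
```

A multimodal extension would build one such matrix per modality (visual, audio) and combine them, which is where the paper's three fusion algorithms come in.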

Cite

CITATION STYLE

APA

Bruno, E., Moenne-Loccoz, N., & Marchand-Maillet, S. (2006). Learning user queries in multimodal dissimilarity spaces. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3877 LNCS, pp. 168–179). https://doi.org/10.1007/11670834_14
