Zero-shot cross-media retrieval with external knowledge

Abstract

Cross-media retrieval, by which users can retrieve results across different media types such as image and text, has drawn much attention recently. Existing methods mainly address the setting where the training data covers all categories that appear in the testing data. However, the number of categories in the real world is effectively unbounded, and it is impossible to include them all in the training data. Because of this scalability limitation, existing methods perform poorly when retrieving items from unseen categories. To address both the "heterogeneity gap" between media types and the gap between seen and unseen categories, this paper proposes a new approach that models multimedia content together with external knowledge. Common semantic representations are generated jointly from media features and category weight vectors learned from online encyclopedias. Experiments on two widely used datasets show the effectiveness of the proposed approach for zero-shot cross-media retrieval.
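To make the abstract's idea concrete, the sketch below illustrates the general zero-shot cross-media retrieval setup, not the authors' model: media features from each modality are projected into a common semantic space aligned with encyclopedia-derived category vectors, and retrieval across media (and assignment to unseen categories) is done by cosine similarity in that space. The projection matrices, feature dimensions, random placeholder data, and nearest-neighbour scoring are all illustrative assumptions rather than details taken from the paper.

# Illustrative sketch of zero-shot cross-media retrieval in a shared
# semantic space. NOT the paper's model: the linear projections,
# dimensions and cosine scoring are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
d_img, d_txt, d_sem = 4096, 300, 300   # image / text / semantic dims (assumed)

# Category "weight vectors" stand in for embeddings learned from online
# encyclopedias (e.g., word vectors of the categories' Wikipedia articles).
category_vecs = {
    "horse": rng.normal(size=d_sem),
    "zebra": rng.normal(size=d_sem),   # an unseen category at training time
}

# Linear projections from each medium into the semantic space.
# In practice these would be learned on seen categories only.
W_img = rng.normal(size=(d_sem, d_img)) * 0.01
W_txt = rng.normal(size=(d_sem, d_txt)) * 0.01

def embed(features, W):
    """Project raw media features into the common semantic space and normalize."""
    z = W @ features
    return z / (np.linalg.norm(z) + 1e-8)

def retrieve(query_feat, W_query, gallery_feats, W_gallery, top_k=5):
    """Rank gallery items of the other medium by cosine similarity."""
    q = embed(query_feat, W_query)
    g = np.stack([embed(f, W_gallery) for f in gallery_feats])
    scores = g @ q
    order = np.argsort(-scores)[:top_k]
    return order, scores[order]

def nearest_category(feat, W, categories):
    """Assign the closest encyclopedia-derived category vector;
    this works even for categories never seen during training."""
    z = embed(feat, W)
    names = list(categories)
    sims = [z @ (v / (np.linalg.norm(v) + 1e-8)) for v in categories.values()]
    return names[int(np.argmax(sims))]

# Example usage with random placeholder features:
image_query = rng.normal(size=d_img)
text_gallery = [rng.normal(size=d_txt) for _ in range(10)]
idx, scores = retrieve(image_query, W_img, text_gallery, W_txt, top_k=3)
print("top texts:", idx, "category:", nearest_category(image_query, W_img, category_vecs))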

Citation (APA)

Chi, J., Huang, X., & Peng, Y. (2018). Zero-shot cross-media retrieval with external knowledge. In Communications in Computer and Information Science (Vol. 819, pp. 200–211). Springer Verlag. https://doi.org/10.1007/978-981-10-8530-7_20
