A novel cross modal hashing algorithm based on multi-modal deep learning

Abstract

With the growing popularity of multi-modal data on the Web, cross-media retrieval has become a hot research topic. Existing cross-modal hashing methods assume that there is a latent space shared by the multi-modal features, and embed the heterogeneous data into a joint abstraction space via linear projections. However, these approaches are sensitive to noise in the data and cannot exploit unlabelled data or multi-modal data with missing values, both of which are common in real-world applications. To address these challenges, we propose a novel Multi-modal Deep Learning based Hashing (MDLH) algorithm. In particular, MDLH uses a deep neural network to encode heterogeneous features into a compact common representation and learns the hash functions on top of that representation. The parameters of the whole model are then fine-tuned in a supervised training stage. Experiments on two standard datasets show that MDLH achieves more effective cross-modal retrieval results than competing methods.
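
The sketch below illustrates the general idea the abstract describes: a deep encoder per modality maps heterogeneous features into a shared space, the continuous representation is binarized into hash codes, and retrieval ranks items by Hamming distance. This is a minimal illustration, not the authors' MDLH implementation: the network sizes, the tanh-plus-sign binarization, and the omission of the supervised fine-tuning stage and its similarity-preserving loss are all assumptions made for brevity.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: illustrative assumptions, not values from the paper.
IMG_DIM, TXT_DIM, HIDDEN, CODE_BITS = 512, 1000, 256, 32


class ModalityEncoder(nn.Module):
    """Deep encoder that maps one modality's features into the shared space."""

    def __init__(self, in_dim: int, hidden: int, code_bits: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, code_bits),
            nn.Tanh(),  # outputs in (-1, 1), so sign() is a natural binarizer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def to_hash_codes(z: torch.Tensor) -> torch.Tensor:
    """Binarize the continuous shared representation into +/-1 hash codes."""
    return torch.sign(z)


def hamming_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Pairwise Hamming distance between +/-1 code matrices a (n,k), b (m,k)."""
    k = a.shape[1]
    # For +/-1 vectors of length k: dot = k - 2 * hamming.
    return (k - a @ b.T) / 2


# Toy usage: encode a text query and rank image codes by Hamming distance.
img_enc = ModalityEncoder(IMG_DIM, HIDDEN, CODE_BITS)
txt_enc = ModalityEncoder(TXT_DIM, HIDDEN, CODE_BITS)

images = torch.randn(100, IMG_DIM)  # stand-in image features
query = torch.randn(1, TXT_DIM)     # stand-in text features

img_codes = to_hash_codes(img_enc(images))
txt_code = to_hash_codes(txt_enc(query))

ranking = hamming_distance(txt_code, img_codes).argsort(dim=1)
print(ranking[0, :5])  # indices of the 5 images nearest to the text query
```

Because both modalities are hashed into the same binary space, a query from either modality can retrieve items of the other with cheap bitwise comparisons; in a trained system the encoders would be fitted so that semantically similar cross-modal pairs receive nearby codes.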

Citation (APA)

Qu, W., Wang, D., Feng, S., Zhang, Y., & Yu, G. (2015). A novel cross modal hashing algorithm based on multi-modal deep learning. In Communications in Computer and Information Science (Vol. 568, pp. 156–167). Springer-Verlag. https://doi.org/10.1007/978-981-10-0080-5_14
