In this paper, we propose a multi-task learning approach for cross-modal image-text retrieval. First, a correlation network is proposed for the relation recognition task, which helps learn the complicated relations and common information across modalities. Then, we propose a correspondence cross-modal autoencoder for the cross-modal input reconstruction task, which correlates the hidden representations of two uni-modal autoencoders. In addition, to further improve cross-modal retrieval performance, two regularization terms (variance and consistency constraints) are imposed on the cross-modal embeddings so that the learned common information has large variance and is modality invariant. Finally, to enable large-scale cross-modal similarity search, a flexible binary transform network is designed to convert the text and image embeddings into binary codes. Extensive experiments on two benchmark datasets demonstrate that our model consistently outperforms strong baseline methods. Source code is available at https://github.com/daerv/DAEVR.
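The abstract only names the components, so the sketch below gives one plausible PyTorch rendering of the variance and consistency constraints and of a binary transform network, purely for illustration. The function and class names (variance_constraint, consistency_constraint, BinaryTransform), the hinge-on-standard-deviation form of the variance term, the mean-squared consistency term, the tanh relaxation with a hard sign at retrieval time, and all dimensions and thresholds are assumptions, not the authors' implementation (see the repository linked above for that).

```python
# Hedged sketch, not the authors' code: one plausible form of the variance and
# consistency constraints and of a binary transform network. Loss forms, layer
# sizes, and thresholds are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def variance_constraint(z: torch.Tensor, target_std: float = 1.0) -> torch.Tensor:
    """Encourage each embedding dimension to keep a large variance across the
    batch (hinge on the per-dimension standard deviation; threshold assumed)."""
    std = torch.sqrt(z.var(dim=0) + 1e-6)
    return F.relu(target_std - std).mean()


def consistency_constraint(z_img: torch.Tensor, z_txt: torch.Tensor) -> torch.Tensor:
    """Pull matched image/text embeddings together so the shared representation
    is modality invariant (plain mean-squared distance here, as an assumption)."""
    return F.mse_loss(z_img, z_txt)


class BinaryTransform(nn.Module):
    """Toy stand-in for a binary transform network: a linear map with tanh
    during training and a hard sign when producing codes for search."""

    def __init__(self, dim: int, code_bits: int):
        super().__init__()
        self.proj = nn.Linear(dim, code_bits)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.proj(z))   # relaxed codes for training

    @torch.no_grad()
    def encode(self, z: torch.Tensor) -> torch.Tensor:
        return torch.sign(self.proj(z))   # binary codes for similarity search


if __name__ == "__main__":
    z_img = torch.randn(32, 256)   # hypothetical image embeddings
    z_txt = torch.randn(32, 256)   # hypothetical text embeddings
    hashing = BinaryTransform(dim=256, code_bits=64)
    loss = (variance_constraint(z_img) + variance_constraint(z_txt)
            + consistency_constraint(z_img, z_txt))
    print(float(loss), hashing.encode(z_img).shape)
```

In this kind of setup the binary codes would typically be compared by Hamming distance at search time, while the relaxed tanh outputs keep the hashing layer differentiable during training; whether the paper uses exactly this relaxation is not stated in the abstract.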
Luo, J., Shen, Y., Ao, X., Zhao, Z., & Yang, M. (2019). Cross-modal image-text retrieval with multitask learning. In International Conference on Information and Knowledge Management, Proceedings (pp. 2309–2312). Association for Computing Machinery. https://doi.org/10.1145/3357384.3358104