Efficient Deep Feature Calibration for Cross-Modal Joint Embedding Learning

Abstract

This paper introduces a two-phase deep feature calibration framework for efficient learning of a semantics-enhanced text-image cross-modal joint embedding, which clearly separates deep feature calibration in data preprocessing from training of the joint embedding model. We use the Recipe1M dataset for the technical description and empirical validation. In preprocessing, we perform deep feature calibration by combining deep feature engineering with semantic context features derived from the raw text-image input data. We leverage an LSTM to identify key terms and NLP methods to produce ranking scores for those key terms before generating the key-term feature. We leverage wideResNet50 to extract and encode image category semantics, which helps align the learned recipe and image embeddings semantically in the joint latent space. In joint embedding learning, we perform deep feature calibration by optimizing a batch-hard triplet loss with soft margin and double negative sampling, combined with a category-based alignment loss and a discriminator-based alignment loss. Extensive experiments demonstrate that our SEJE approach with deep feature calibration significantly outperforms state-of-the-art approaches.
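To make the loss formulation concrete, the sketch below shows one plausible reading of a batch-hard triplet loss with soft margin and negatives mined from both modalities ("double" negative sampling: hardest image per recipe anchor and hardest recipe per image anchor). It is a minimal PyTorch illustration under the assumption of L2-normalized, row-aligned recipe/image embeddings; the function name and interface are hypothetical and not taken from the authors' implementation.

```python
import torch
import torch.nn.functional as F

def batch_hard_soft_margin_triplet(recipe_emb: torch.Tensor,
                                   image_emb: torch.Tensor) -> torch.Tensor:
    """Sketch of a soft-margin, batch-hard triplet loss with negatives
    drawn from both modalities. recipe_emb and image_emb are (B, D)
    L2-normalized embeddings where row i of each tensor forms a pair."""
    # Cosine similarity between every recipe and every image in the batch.
    sim = recipe_emb @ image_emb.t()                     # (B, B)
    pos = sim.diag()                                     # matching pairs

    # Mask positives so they cannot be selected as negatives.
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg_sim = sim.masked_fill(mask, float('-inf'))

    # Batch-hard mining in both directions (double negative sampling).
    hard_neg_img = neg_sim.max(dim=1).values             # recipe -> hardest image
    hard_neg_rec = neg_sim.max(dim=0).values             # image  -> hardest recipe

    # Soft margin: log(1 + exp(neg - pos)) replaces the fixed-margin hinge.
    loss_r2i = F.softplus(hard_neg_img - pos).mean()
    loss_i2r = F.softplus(hard_neg_rec - pos).mean()
    return loss_r2i + loss_i2r
```

In the paper, this retrieval loss is further combined with the category-based and discriminator-based alignment losses during joint embedding learning; those terms are omitted here for brevity.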

Citation (APA)

Xie, Z., Liu, L., Li, L., & Zhong, L. (2021). Efficient Deep Feature Calibration for Cross-Modal Joint Embedding Learning. In ICMI 2021 - Proceedings of the 2021 International Conference on Multimodal Interaction (pp. 43–51). Association for Computing Machinery, Inc. https://doi.org/10.1145/3462244.3479892
