Cross-modal subspace learning with scheduled adaptive margin constraints


Abstract

Cross-modal embeddings between textual and visual modalities aim to organise multimodal instances by their semantic correlations. State-of-the-art approaches use maximum-margin methods, based on the hinge loss, to enforce a constant margin m that separates projections of multimodal instances from different categories. In this paper, we propose a novel scheduled adaptive maximum-margin (SAM) formulation that infers triplet-specific constraints during training, thereby organising instances by adaptively enforcing inter-category and inter-modality correlations. This is supported by a scheduled adaptive margin function that is smoothly activated over training, replacing the static margin with an adaptively inferred one that reflects triplet-specific semantic correlations while accounting for the incremental learning behaviour of neural networks, thus promoting category cluster formation. Experiments on widely used datasets show that our model improves upon state-of-the-art approaches, achieving a relative improvement of up to ≈ 12.5% over the second-best method, confirming the effectiveness of the scheduled adaptive margin formulation.
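
The core idea can be illustrated with a short sketch of a triplet hinge loss whose margin is scheduled from a static value towards a triplet-specific, adaptively inferred one. The code below is a minimal, hypothetical illustration (PyTorch assumed); the schedule shape and the way the semantic margin is obtained are simplified stand-ins, not the paper's exact formulation.

# Minimal sketch: triplet hinge loss with a scheduled adaptive margin.
# All names and the sigmoid-shaped schedule are illustrative assumptions,
# not the paper's exact method.
import torch
import torch.nn.functional as F

def scheduled_adaptive_margin_loss(anchor, positive, negative,
                                   semantic_margin, epoch, total_epochs,
                                   base_margin=0.1):
    # Smooth activation of the adaptive term over training (goes from ~0 to ~1).
    t = epoch / max(total_epochs, 1)
    alpha = torch.sigmoid(torch.tensor(10.0 * (t - 0.5)))

    # Per-triplet margin: blend the static margin with the adaptively
    # inferred, triplet-specific semantic margin.
    margin = (1.0 - alpha) * base_margin + alpha * semantic_margin

    # Hinge loss on cosine similarities of cross-modal projections.
    sim_pos = F.cosine_similarity(anchor, positive)
    sim_neg = F.cosine_similarity(anchor, negative)
    return F.relu(margin - sim_pos + sim_neg).mean()

Here semantic_margin would be a per-triplet tensor derived from inter-category and inter-modality correlations; early in training the loss behaves like a standard constant-margin hinge, and the adaptive constraints take over as training progresses.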

Citation (APA)

Semedo, D., & Magalhães, J. (2019). Cross-modal subspace learning with scheduled adaptive margin constraints. In MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia (pp. 75–83). Association for Computing Machinery, Inc. https://doi.org/10.1145/3343031.3351030
