Learning Modality-Invariant Latent Representations for Generalized Zero-shot Learning


Abstract

Recently, feature generating methods have been successfully applied to zero-shot learning (ZSL). However, most previous approaches generate only visual representations for zero-shot recognition. In fact, typical ZSL is a classic multi-modal learning protocol consisting of a visual space and a semantic space. In this paper, we therefore present a new method that simultaneously generates both visual and semantic representations, so that the essential multi-modal information associated with unseen classes can be captured. Specifically, we address the most challenging issue in such a paradigm, i.e., how to handle the domain shift and thus guarantee that the learned representations are modality-invariant. To this end, we propose two strategies: 1) leveraging the mutual information between the latent visual representations and the semantic representations; 2) maximizing the entropy of the joint distribution of the two latent representations. By leveraging these two strategies, we argue that the two modalities can be well aligned. Finally, extensive experiments on five widely used datasets verify that the proposed method significantly outperforms previous state-of-the-art methods.
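To make the first strategy concrete, the toy sketch below estimates an InfoNCE-style lower bound on the mutual information between paired visual and semantic latent vectors, the kind of quantity the abstract's alignment objective would leverage. This is an illustrative stand-in, not the authors' implementation; the function name, the cosine-similarity logits, and the `temperature` hyper-parameter are all assumptions made for the example.

```python
import numpy as np

def infonce_mi_lower_bound(z_v, z_s, temperature=0.1):
    """InfoNCE-style lower bound on the mutual information between
    paired visual latents z_v and semantic latents z_s (row i of each
    matrix is assumed to describe the same class). Illustrative only."""
    # L2-normalize so the logits are scaled cosine similarities.
    z_v = z_v / np.linalg.norm(z_v, axis=1, keepdims=True)
    z_s = z_s / np.linalg.norm(z_s, axis=1, keepdims=True)
    logits = z_v @ z_s.T / temperature
    # Row-wise log-softmax; matched pairs sit on the diagonal.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Mean diagonal log-probability plus log(batch size) bounds MI from below.
    return float(np.mean(np.diag(log_probs)) + np.log(len(z_v)))

rng = np.random.default_rng(0)
z = rng.normal(size=(64, 16))
# Well-aligned modalities (semantic latents are near-copies of visual ones).
aligned = infonce_mi_lower_bound(z, z + 0.01 * rng.normal(size=z.shape))
# Misaligned modalities (pairing destroyed by shuffling the rows).
shuffled = infonce_mi_lower_bound(z, rng.permutation(z, axis=0))
assert aligned > shuffled  # alignment yields a higher MI estimate
```

In a generative ZSL pipeline this bound would be maximized as a training loss on the two generators' latent outputs, pushing the visual and semantic latents toward a modality-invariant space.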

Citation (APA)

Li, J., Jing, M., Zhu, L., Ding, Z., Lu, K., & Yang, Y. (2020). Learning Modality-Invariant Latent Representations for Generalized Zero-shot Learning. In MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia (pp. 1348–1356). Association for Computing Machinery, Inc. https://doi.org/10.1145/3394171.3413503
