Self-focus deep embedding model for coarse-grained zero-shot classification


Abstract

Zero-shot learning (ZSL), i.e., classifying samples from classes for which no labeled training data are available, is a challenging yet important research topic. One of the most common ideas in ZSL is to map the data (e.g., images) and the semantic attributes into the same embedding space. For coarse-grained classification tasks, however, the samples of each class tend to be unevenly distributed, so the learned embedding function may map the attributes to an inappropriate location, limiting classification performance. In this paper, we propose a novel regularized deep embedding model for ZSL in which a self-focus mechanism is constructed to constrain the learning of the embedding function. During training, the distances along different dimensions of the embedding space are weighted conditioned on the class, so that the location of the prototype mapped from the attributes can be adjusted according to the distribution of the samples of each class. Moreover, over-fitting of the embedding function to the seen classes is also mitigated. Experiments on four commonly used zero-shot databases show that the proposed method attains significant improvements on coarse-grained data sets.
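The abstract's core idea — mapping class attributes to prototypes in an embedding space and then weighting per-dimension distances conditioned on the class — can be illustrated with a minimal sketch. This is not the paper's implementation: the linear attribute map, the weight values, and all dimensions below are hypothetical placeholders (in the paper the mapping is a learned deep network and the focus weights are learned during training).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 5 classes,
# 8-dim attribute vectors, 6-dim embedding space.
n_classes, attr_dim, emb_dim = 5, 8, 6

# Semantic attribute vector for each class.
attributes = rng.normal(size=(n_classes, attr_dim))

# A stand-in linear embedding that maps attributes to class
# prototypes; the paper learns a deep embedding function instead.
W = rng.normal(size=(attr_dim, emb_dim))
prototypes = attributes @ W                      # (n_classes, emb_dim)

# "Self-focus" weights: one non-negative weight per class and per
# embedding dimension, normalized within each class. Illustrative
# random constants here; learned in the paper.
focus = rng.uniform(size=(n_classes, emb_dim))
focus /= focus.sum(axis=1, keepdims=True)

def classify(x):
    """Assign x to the class with the smallest focused distance."""
    sq = (prototypes - x) ** 2                   # per-dim squared gaps
    dist = (focus * sq).sum(axis=1)              # class-weighted distance
    return int(np.argmin(dist))

# A sample lying exactly on class 3's prototype is recovered.
print(classify(prototypes[3]))                   # → 3
```

Because the weights are class-conditioned, each class can emphasize the embedding dimensions along which its samples are tightly clustered and discount the others, which is the adjustment the abstract describes for unevenly distributed coarse-grained classes.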

Citation (APA)

Yang, G., Huang, K., Zhang, R., Goulermas, J. Y., & Hussain, A. (2020). Self-focus deep embedding model for coarse-grained zero-shot classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11691 LNAI, pp. 12–22). Springer. https://doi.org/10.1007/978-3-030-39431-8_2
