Abstract
Zero-shot learning (ZSL) aims at recognizing data from unseen categories, using information learned from training data with predefined (seen) labels or attributes. In this paper, we propose an effective learning model for ZSL, which focuses on relating the image and semantic domains with classification guarantees. In particular, we introduce a semantics-preserving locality embedding when associating the above cross-domain data. We show that our ZSL model can be extended from the inductive to the transductive ZSL setting when unlabeled data of unseen categories are available during training. In the experiments, we show that our proposed method performs favorably against baseline and state-of-the-art approaches on multiple benchmark datasets.
Citation
Tao, S. Y., Tsai, Y. H. H., Yeh, Y. R., & Wang, Y. C. F. (2017). Semantics-preserving locality embedding for zero-shot learning. In British Machine Vision Conference 2017, BMVC 2017. BMVA Press. https://doi.org/10.5244/c.31.3