Joint dictionaries for zero-shot learning

Abstract

A classic approach toward zero-shot learning (ZSL) is to map the input domain to a set of semantically meaningful attributes that can later be used to classify unseen classes of data (e.g. visual data). In this paper, we propose to learn a visual feature dictionary that has semantically meaningful atoms. Such a dictionary is learned via joint dictionary learning for the visual domain and the attribute domain, while enforcing the same sparse coding for both dictionaries. Our novel attribute-aware formulation provides an algorithmic solution to the domain shift/hubness problem in ZSL. Upon learning the joint dictionaries, images from unseen classes can be mapped into the attribute space by finding the attribute-aware joint sparse representation using solely the visual data. We demonstrate that our approach provides superior or comparable performance to that of the state of the art on benchmark datasets.
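To make the idea in the abstract concrete, the following is a minimal sketch of joint dictionary learning with a shared sparse code, using toy data and a simple alternating scheme (sparse coding plus a least-squares dictionary refit). All names here (D_x, D_a, lam, n_atoms, the optimization details) are illustrative assumptions, not the paper's actual formulation or implementation.

```python
# Minimal sketch: joint dictionaries for visual features X and attributes Z,
# coupled through one shared sparse code matrix A (illustrative, not the
# authors' code; hyperparameters and update rules are assumptions).
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)

# Toy "seen class" training data: visual features X and attribute vectors Z.
n_samples, d_visual, d_attr, n_atoms = 200, 64, 16, 32
X = rng.standard_normal((d_visual, n_samples))   # columns = samples
Z = rng.standard_normal((d_attr, n_samples))     # columns = samples

lam, alpha, n_iters = 1.0, 0.1, 20               # attribute weight, sparsity, iterations

def normalize_columns(D):
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-8)

# Two dictionaries that share one sparse code matrix A.
D_x = normalize_columns(rng.standard_normal((d_visual, n_atoms)))
D_a = normalize_columns(rng.standard_normal((d_attr, n_atoms)))

for _ in range(n_iters):
    # Sparse coding step: code the stacked signal [X; sqrt(lam)*Z] against the
    # stacked dictionary [D_x; sqrt(lam)*D_a], which enforces one shared code.
    D_joint = normalize_columns(np.vstack([D_x, np.sqrt(lam) * D_a]))
    Y_joint = np.vstack([X, np.sqrt(lam) * Z])
    coder = SparseCoder(dictionary=D_joint.T,
                        transform_algorithm="lasso_lars", transform_alpha=alpha)
    A = coder.transform(Y_joint.T).T             # shared sparse codes

    # Dictionary update step: least-squares refit of each dictionary to its domain.
    G = A @ A.T + 1e-6 * np.eye(n_atoms)
    D_x = normalize_columns(X @ A.T @ np.linalg.inv(G))
    D_a = normalize_columns(Z @ A.T @ np.linalg.inv(G))

# Zero-shot prediction: sparse-code an unseen image with D_x only, then map the
# code through D_a to get predicted attributes for nearest-class matching.
x_test = rng.standard_normal((d_visual, 1))
coder_x = SparseCoder(dictionary=D_x.T, transform_algorithm="lasso_lars",
                      transform_alpha=alpha)
a_test = coder_x.transform(x_test.T).T
attr_pred = D_a @ a_test                          # predicted attribute vector
```

In this sketch, classification of an unseen image would amount to comparing attr_pred against the attribute signatures of the unseen classes (e.g. by nearest neighbor); the paper's attribute-aware formulation refines this mapping to mitigate the domain shift/hubness problem.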

Citation (APA)

Kolouri, S., Rostami, M., Owechko, Y., & Kim, K. (2018). Joint dictionaries for zero-shot learning. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 3431–3439). AAAI Press. https://doi.org/10.1609/aaai.v32i1.11649
