Norm-guided Adaptive Visual Embedding for Zero-Shot Sketch-Based Image Retrieval

Abstract

Zero-shot sketch-based image retrieval (ZS-SBIR), which aims to retrieve photos with sketches under the zero-shot scenario, has broad real-world applications. Most existing methods leverage language models to generate class prototypes and use them to arrange the locations of all categories in the common space for photos and sketches. Although great progress has been made, few consider whether such pre-defined prototypes are necessary for ZS-SBIR at all: the locations of unseen-class samples in the embedding space are determined by visual appearance, and a purely visual embedding in fact performs better. To this end, we propose a novel Norm-guided Adaptive Visual Embedding (NAVE) model that adaptively builds the common space based on visual similarity instead of language-based pre-defined prototypes. To further enhance the representation quality of unseen classes for both the photo and sketch modalities, a modality norm discrepancy measure and a noisy-label regularizer are jointly employed to quantify and repair the modality bias of the learned common embedding. Experiments on two challenging datasets demonstrate the superiority of our NAVE over state-of-the-art competitors.
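To illustrate the idea of modality norm discrepancy mentioned in the abstract, the sketch below compares the mean embedding-norm gap between the two modalities. The function name and the exact form of the measure are assumptions for illustration; the paper's actual loss formulation may differ.

```python
import numpy as np

def modality_norm_discrepancy(photo_emb, sketch_emb):
    """Mean L2-norm gap between photo and sketch embeddings.

    A large gap suggests modality bias in the shared space: one modality's
    features systematically have larger magnitudes than the other's.
    (Illustrative measure; the exact regularizer in NAVE is not shown here.)
    """
    photo_norms = np.linalg.norm(photo_emb, axis=1)    # per-photo L2 norm
    sketch_norms = np.linalg.norm(sketch_emb, axis=1)  # per-sketch L2 norm
    return abs(photo_norms.mean() - sketch_norms.mean())

# Toy usage with random embeddings: sketch features are deliberately
# scaled down, mimicking a biased common space.
rng = np.random.default_rng(0)
photos = rng.normal(size=(32, 512))
sketches = 0.5 * rng.normal(size=(32, 512))
gap = modality_norm_discrepancy(photos, sketches)  # nonzero gap = bias
```

In a training loop, such a gap could be added to the retrieval loss as a penalty, encouraging both modalities to occupy the shared space at comparable scales.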

Cite

APA

Wang, W., Shi, Y., Chen, S., Peng, Q., Zheng, F., & You, X. (2021). Norm-guided Adaptive Visual Embedding for Zero-Shot Sketch-Based Image Retrieval. In IJCAI International Joint Conference on Artificial Intelligence (pp. 1106–1112). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/153
