Aligning visual prototypes with BERT embeddings for few-shot learning


Abstract

Few-shot learning (FSL) is the task of learning to recognize previously unseen categories of images from a small number of training examples. This is a challenging task, as the available examples may not be enough to unambiguously determine which visual features are most characteristic of the considered categories. To alleviate this issue, we propose a method that additionally takes into account the names of the image classes. While the use of class names has already been explored in previous work, our approach differs in two key aspects. First, while previous work has aimed to directly predict visual prototypes from word embeddings, we found that better results can be obtained by treating visual and text-based prototypes separately. Second, we propose a simple strategy for learning class name embeddings using the BERT language model, which we found to substantially outperform the GloVe vectors that were used in previous work. We furthermore propose a strategy for dealing with the high dimensionality of these vectors, inspired by models for aligning cross-lingual word embeddings. We provide experiments on miniImageNet, CUB and tieredImageNet, showing that our approach consistently improves the state-of-the-art in metric-based FSL.
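
The sketch below is a minimal illustration (not the authors' released code) of the idea described above: visual prototypes and text-based prototypes are kept separate, with the BERT class-name embeddings projected into the visual feature space by a simple linear map, loosely analogous to the mappings used for aligning cross-lingual word embeddings. The dimensionalities (640 for visual features, 768 for BERT), the learned mixing weight, and the distance-combination scheme are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextAlignedProtoNet(nn.Module):
    """Hypothetical metric-based FSL head mixing visual and text prototypes."""

    def __init__(self, visual_dim=640, text_dim=768):
        super().__init__()
        # Linear alignment map from BERT space into the visual feature space,
        # in the spirit of cross-lingual embedding alignment (an assumption).
        self.align = nn.Linear(text_dim, visual_dim, bias=False)
        # Learnable weight balancing the two kinds of prototypes (an assumption).
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, support, support_labels, query, class_name_emb, n_way):
        # support: (n_way * k_shot, visual_dim) features of support images
        # query: (n_query, visual_dim) features of query images
        # class_name_emb: (n_way, text_dim) BERT embeddings of the class names
        visual_protos = torch.stack(
            [support[support_labels == c].mean(0) for c in range(n_way)]
        )                                          # (n_way, visual_dim)
        text_protos = self.align(class_name_emb)  # (n_way, visual_dim)

        # Keep visual and text-based prototypes separate and mix their
        # (squared Euclidean) distances, rather than predicting visual
        # prototypes directly from the word embeddings.
        a = torch.sigmoid(self.alpha)
        d_vis = torch.cdist(query, visual_protos) ** 2
        d_txt = torch.cdist(query, text_protos) ** 2
        logits = -(a * d_vis + (1 - a) * d_txt)
        return F.log_softmax(logits, dim=-1)


# Toy usage on random tensors for a 5-way 1-shot episode.
if __name__ == "__main__":
    model = TextAlignedProtoNet()
    support = torch.randn(5, 640)
    support_labels = torch.arange(5)
    query = torch.randn(15, 640)
    class_names_bert = torch.randn(5, 768)
    log_probs = model(support, support_labels, query, class_names_bert, n_way=5)
    print(log_probs.shape)  # torch.Size([15, 5])
```
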

Citation (APA)

Yan, K., Bouraoui, Z., Wang, P., Jameel, S., & Schockaert, S. (2021). Aligning visual prototypes with BERT embeddings for few-shot learning. In ICMR 2021 - Proceedings of the 2021 International Conference on Multimedia Retrieval (pp. 367–375). Association for Computing Machinery, Inc. https://doi.org/10.1145/3460426.3463641
