Associative Alignment for Few-Shot Image Classification

Abstract

Few-shot image classification aims at training a model from only a few examples for each of the “novel” classes. This paper proposes the idea of associative alignment for leveraging part of the base data by aligning the novel training instances to the closely related ones in the base training set. This expands the size of the effective novel training set by adding extra “related base” instances to the few novel ones, thereby allowing a constructive fine-tuning. We propose two associative alignment strategies: 1) a metric-learning loss for minimizing the distance between related base samples and the centroid of novel instances in the feature space, and 2) a conditional adversarial alignment loss based on the Wasserstein distance. Experiments on four standard datasets and three backbones demonstrate that our centroid-based alignment loss results in absolute accuracy improvements of 4.4%, 1.2%, and 6.2% in 5-shot learning over the state of the art for object recognition, fine-grained classification, and cross-domain adaptation, respectively.
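
As a rough illustration of the first strategy, the sketch below shows what a centroid-based alignment loss could look like in PyTorch. The function name `centroid_alignment_loss` and all tensor names are hypothetical and not taken from the authors' code; it simply pulls each related base embedding toward the centroid of the novel instances of the class it was associated with, which is the behavior the abstract describes.

```python
import torch


def centroid_alignment_loss(novel_features: torch.Tensor,
                            novel_labels: torch.Tensor,
                            related_base_features: torch.Tensor,
                            related_base_labels: torch.Tensor) -> torch.Tensor:
    """Minimal sketch (not the authors' implementation) of a centroid-based
    alignment loss.

    novel_features:        (N, D) embeddings of the few novel examples
    novel_labels:          (N,)   novel-class index of each novel example
    related_base_features: (M, D) embeddings of "related base" examples
    related_base_labels:   (M,)   novel-class index each base example is aligned to
    """
    loss = torch.zeros((), device=novel_features.device)
    classes = novel_labels.unique()
    for c in classes:
        # Centroid of the novel instances of class c in feature space.
        centroid = novel_features[novel_labels == c].mean(dim=0)
        base_c = related_base_features[related_base_labels == c]
        if base_c.numel() == 0:
            continue
        # Mean squared Euclidean distance from related base samples to the centroid.
        loss = loss + ((base_c - centroid) ** 2).sum(dim=1).mean()
    return loss / classes.numel()
```

In practice, a term like this would presumably be added to the standard classification loss during fine-tuning (weighted by a hyperparameter), so that the related base instances effectively enlarge the novel training set; the adversarial Wasserstein variant mentioned above is not sketched here.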

Citation (APA)

Afrasiyabi, A., Lalonde, J. F., & Gagné, C. (2020). Associative Alignment for Few-Shot Image Classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12350 LNCS, pp. 18–35). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58558-7_2
