Latent Weights Generating for Few Shot Learning Using Information Theory

Abstract

Few-shot image classification aims to learn a classifier from limited labeled data. Generating the classification weights has been applied in many meta-learning approaches for few-shot image classification because of its simplicity and effectiveness. However, using fixed classification weights for all query samples within one task can be sub-optimal: under the few-shot constraint, it is difficult to generate exact, universal classification weights for all the diverse query samples from very few training samples. In this work, we introduce Latent Weights Generating using Information Theory (LWGIT) for few-shot learning, which addresses this issue by generating different classification weights for different query samples, letting each query sample attend to the whole support set. Experimental results demonstrate the effectiveness of LWGIT, which exceeds the performance of existing state-of-the-art models.
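The core idea described in the abstract, producing query-specific classification weights by attending over the support set, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration of that general mechanism using standard scaled dot-product attention in PyTorch; it is not the authors' exact LWGIT architecture, and the class name `QueryConditionedWeightGenerator` and all layer choices are hypothetical.

```python
# Illustrative sketch only: query-conditioned weight generation for few-shot
# classification via attention over the support set. Not the authors' exact
# LWGIT model; all names and design choices here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QueryConditionedWeightGenerator(nn.Module):
    """Generate per-query classification weights by attending to the support set."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.q_proj = nn.Linear(feat_dim, feat_dim)
        self.k_proj = nn.Linear(feat_dim, feat_dim)
        self.v_proj = nn.Linear(feat_dim, feat_dim)
        self.scale = feat_dim ** -0.5

    def forward(self, query_feats, support_feats, support_labels, n_way):
        # query_feats:    (Q, D) embedded query samples
        # support_feats:  (S, D) embedded support samples
        # support_labels: (S,)   class indices in [0, n_way)
        q = self.q_proj(query_feats)                      # (Q, D)
        k = self.k_proj(support_feats)                    # (S, D)
        v = self.v_proj(support_feats)                    # (S, D)
        attn = F.softmax(q @ k.t() * self.scale, dim=-1)  # (Q, S) attention of each query over the support set

        # Pool attended support information per class to form one weight
        # vector per class *per query*: shape (Q, n_way, D).
        one_hot = F.one_hot(support_labels, n_way).float()           # (S, n_way)
        per_class = attn.unsqueeze(-1) * one_hot.unsqueeze(0)        # (Q, S, n_way)
        per_class = per_class / per_class.sum(dim=1, keepdim=True).clamp_min(1e-8)
        weights = torch.einsum('qsc,sd->qcd', per_class, v)          # (Q, n_way, D)

        # Logits: inner product between each query and its own generated class weights.
        logits = torch.einsum('qd,qcd->qc', query_feats, weights)
        return logits


# Example usage (5-way, 1-shot, 3 query samples, 64-dim features):
gen = QueryConditionedWeightGenerator(feat_dim=64)
logits = gen(torch.randn(3, 64), torch.randn(5, 64), torch.arange(5), n_way=5)
```

The point of the sketch is the contrast with fixed-weight generators: here the weight tensor has a leading query dimension, so each query sample classifies itself against weights conditioned on its own attention over the support set, which is the behaviour the abstract attributes to LWGIT.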

Citation (APA)

Li, Z., & Ji, Y. (2020). Latent Weights Generating for Few Shot Learning Using Information Theory. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12249 LNCS, pp. 1003–1016). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58799-4_72
