Multi-scale Comparison Network for Few-Shot Learning

Abstract

Few-shot learning, which learns from only a small number of samples, is an emerging field in multimedia. In this paper, we systematically explore the influence of scale information, including multi-scale feature extraction, multi-scale comparison, and the increased parameters brought by multiple scales, and present a novel end-to-end model called the Multi-scale Comparison Network (MSCN) for few-shot learning. The proposed MSCN uses convolutions at different scales for comparison, addressing the problem of large variation in target size across images in few-shot learning. It first uses a 4-layer encoder to encode support and testing samples into feature maps. After depth-wise splicing of these feature maps, MSCN applies a comparator, comprising two layers of multi-scale comparison modules and two fully connected layers, to compute the similarity between support and testing samples. Experimental results on two benchmark datasets, Omniglot and miniImagenet, show the effectiveness of the proposed MSCN, which achieves an average improvement of 2% on miniImagenet over the recent Relation Network across all experimental settings.
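The pipeline described in the abstract (shared encoder, depth-wise splicing of support and query feature maps, multi-scale comparator, fully connected similarity head) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the channel widths, kernel sizes of the parallel comparison branches, and pooling layout are assumptions, and the input size assumes 84×84 miniImagenet-style images.

```python
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """One encoder layer: 3x3 conv + batch norm + ReLU + 2x2 max pool."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(), nn.MaxPool2d(2))

    def forward(self, x):
        return self.block(x)


class MultiScaleCompare(nn.Module):
    """Compare features with parallel convolutions at different scales
    (1x1, 3x3, 5x5 branches here are illustrative choices)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, out_ch, 1)
        self.branch3 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, out_ch, 5, padding=2)

    def forward(self, x):
        return torch.relu(self.branch1(x) + self.branch3(x) + self.branch5(x))


class MSCN(nn.Module):
    def __init__(self):
        super().__init__()
        # 4-layer encoder shared by support and testing (query) images.
        self.encoder = nn.Sequential(
            ConvBlock(3, 64), ConvBlock(64, 64),
            ConvBlock(64, 64), ConvBlock(64, 64))
        # Comparator: two multi-scale comparison modules, then two FC layers.
        self.compare = nn.Sequential(
            MultiScaleCompare(128, 64), nn.MaxPool2d(2),
            MultiScaleCompare(64, 64), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 8), nn.ReLU(),
            nn.Linear(8, 1), nn.Sigmoid())

    def forward(self, support, query):
        fs, fq = self.encoder(support), self.encoder(query)
        pair = torch.cat([fs, fq], dim=1)  # depth-wise splicing of feature maps
        return self.fc(self.compare(pair))  # similarity score in [0, 1]
```

A query image is scored against each class's support sample(s), and the class with the highest similarity is predicted; the parallel convolution branches let the comparator match targets that appear at different sizes in the two images.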

Citation (APA)

Chen, P., Yuan, M., & Lu, T. (2020). Multi-scale Comparison Network for Few-Shot Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11962 LNCS, pp. 3–13). Springer. https://doi.org/10.1007/978-3-030-37734-2_1
