Metric-based meta-learning is effective for solving few-shot problems. Typically, a metric model learns a task-agnostic embedding function that maps instances into a low-dimensional embedding space, then classifies unlabeled examples by similarity comparison. However, different classification tasks have their own discriminative characteristics, and previous approaches are constrained to a single set of features for all possible tasks. In this work, we introduce the Context Adaptive Metric Model (CAMM), which adaptively extracts key features and can be applied to most metric models. Our extension consists of two parts: a context-parameter module and a self-evaluation module. The context is interpreted as a task representation that modulates the behavior of the feature extractor. CAMM fine-tunes the context parameters via the self-evaluation module to generate task-specific embedding functions. We demonstrate that our approach is competitive with recent state-of-the-art systems and improves performance considerably (4%–6% relative) over baselines on the mini-ImageNet benchmark. Our code is publicly available at https://github.com/Jorewang/CAMM.
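The core idea of a context that modulates a shared feature extractor can be sketched minimally as follows. This is an illustrative assumption, not the paper's implementation: it uses a FiLM-style per-dimension scale and shift as the context parameters, applied on top of a task-agnostic linear embedding; the names `embed`, `gamma`, and `beta` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W, context):
    """Task-agnostic linear embedding modulated by context parameters.

    The scale/shift (FiLM-style) conditioning here is an illustrative
    assumption; CAMM's actual modulation mechanism may differ.
    """
    gamma, beta = context             # per-dimension scale and shift
    h = x @ W                         # shared (task-agnostic) projection
    return h * (1.0 + gamma) + beta   # task-specific modulation

# Shared weights, conceptually learned across many tasks.
W = rng.standard_normal((16, 8))

# Zero context parameters leave the shared embedding unchanged.
context = (np.zeros(8), np.zeros(8))
x = rng.standard_normal((5, 16))
assert np.allclose(embed(x, W, context), x @ W)

# After per-task fine-tuning (the self-evaluation step in the paper),
# a nonzero context reshapes the features for that specific task.
tuned = (0.1 * rng.standard_normal(8), 0.1 * rng.standard_normal(8))
adapted = embed(x, W, tuned)
print(adapted.shape)  # (5, 8)
```

The appeal of this factorization is that only the small context vectors are adapted per task, while the heavy feature extractor `W` stays fixed, which keeps per-task adaptation cheap and fast.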
Wang, Z., & Li, F. (2020). Context Adaptive Metric Model for Meta-learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12396 LNCS, pp. 393–405). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-61609-0_31