While human intelligence can recognize characteristic features of a class from one or a few examples, learning from few examples remains a challenging task in machine learning. Deep learning models typically require hundreds of thousands of samples to generalize well, and despite recent advances, they do not easily generalize to new classes with little supervision. Few-shot learning (FSL) aims to learn to recognize new classes from only a few examples per class. However, learning with few examples makes it difficult for a model to generalize and leaves it susceptible to overfitting. To overcome this difficulty, data augmentation techniques have been applied to FSL. Existing data augmentation approaches, however, rely heavily on human experts manually designing effective augmentation strategies from prior knowledge. In this work, we propose an efficient data augmentation network, called EDANet, that automatically selects the most effective augmentation approaches to achieve optimal FSL performance without human intervention. Our method avoids the reliance on domain knowledge and the expensive labor of hand-crafting augmentation rules. We evaluate the proposed approach on widely used FSL benchmarks (Omniglot and mini-ImageNet). Experimental results with three popular FSL networks show that the proposed approach improves over existing baselines by finding an optimal combination of candidate augmentation strategies.
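The core idea of selecting an optimal combination of candidate augmentation strategies can be illustrated with a minimal search sketch. Note this is an assumption-laden toy, not the paper's actual EDANet: the candidate operation names, their scores, and the exhaustive subset search are all hypothetical stand-ins; a real system would score each combination by training and validating an FSL model (e.g. on mini-ImageNet) rather than summing fixed mock gains.

```python
import itertools

# Hypothetical pool of candidate augmentation operations (names are
# illustrative; the abstract does not specify the actual candidate set).
CANDIDATES = ["flip", "rotate", "crop", "color_jitter"]

# Mock per-operation accuracy gains. In a real search, evaluate_combo()
# would train an FSL model with the chosen augmentations and return its
# validation accuracy instead of this fixed lookup.
MOCK_GAINS = {"flip": 0.02, "rotate": 0.015, "crop": 0.01, "color_jitter": 0.005}
BASE_ACCURACY = 0.50

def evaluate_combo(combo):
    """Stand-in for few-shot validation accuracy under an augmentation combo."""
    return BASE_ACCURACY + sum(MOCK_GAINS[op] for op in combo)

def search_best_combination(candidates):
    """Score every non-empty subset of candidates and return the best one.

    Exhaustive search is feasible only for small pools; a learned selector
    (as the paper proposes) scales past this brute-force sketch.
    """
    best_combo, best_score = (), float("-inf")
    for r in range(1, len(candidates) + 1):
        for combo in itertools.combinations(candidates, r):
            score = evaluate_combo(combo)
            if score > best_score:
                best_combo, best_score = combo, score
    return best_combo, best_score

if __name__ == "__main__":
    best, score = search_best_combination(CANDIDATES)
    print(best, round(score, 3))
```

Under these mock gains every operation helps, so the search selects all four; with a real (noisy, interacting) evaluation, subsets can beat the full pool, which is what makes automatic selection worthwhile.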
Cho, W., & Kim, E. (2022). Improving Augmentation Efficiency for Few-Shot Learning. IEEE Access, 10, 17697–17706. https://doi.org/10.1109/ACCESS.2022.3151057