Regularising Knowledge Transfer by Meta Functional Learning


Abstract

The capability of machine learning classifiers depends heavily on the scale of available training data and is limited by model overfitting in data-scarce learning tasks. To address this problem, this work proposes a novel Meta Functional Learning (MFL) approach that meta-learns a generalisable functional model from data-rich tasks whilst simultaneously regularising knowledge transfer to data-scarce tasks. MFL computes meta-knowledge on functional regularisation that generalises across different learning tasks, by which functional training on limited labelled data promotes learning more discriminative functions. Moreover, we adopt an Iterative Update strategy on MFL (MFL-IU), which improves the knowledge transfer regularisation of MFL by progressively learning the functional regularisation during knowledge transfer. Experiments on three Few-Shot Learning (FSL) benchmarks (miniImageNet, CIFAR-FS and CUB) show that meta functional learning for regularising knowledge transfer benefits FSL classifiers.
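The core idea of functional regularisation can be illustrated with a minimal sketch: a classifier trained on scarce labelled data is penalised for drifting away from a meta-learned functional prior. The function name `train_regularised`, the prior weights `w_meta`, and the coefficient `lam` below are illustrative stand-ins, not the paper's actual implementation; the sketch assumes a simple linear (logistic) classifier trained by gradient descent.

```python
import numpy as np

def train_regularised(X, y, w_meta, lam=1.0, lr=0.05, steps=500):
    """Fit a linear classifier on scarce data (X, y) while regularising
    the learned function towards a meta-learned prior w_meta.

    lam controls the strength of the functional regulariser: lam -> 0
    recovers plain logistic regression; large lam keeps w near w_meta.
    This is a hypothetical stand-in for the MFL regulariser, not the
    authors' method.
    """
    w = np.zeros_like(w_meta)
    for _ in range(steps):
        logits = X @ w
        p = 1.0 / (1.0 + np.exp(-logits))          # sigmoid predictions
        grad_data = X.T @ (p - y) / len(y)          # logistic-loss gradient
        grad_reg = lam * (w - w_meta)               # pull towards the prior
        w -= lr * (grad_data + grad_reg)
    return w

# Toy few-shot task: four labelled points, two features.
X = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
y = np.array([1., 1., 0., 0.])
w_meta = np.array([2.0, -1.0])  # assumed meta-learned prior
w = train_regularised(X, y, w_meta, lam=1.0)
```

An iterative-update variant in the spirit of MFL-IU could then re-estimate the prior from the newly learned function and repeat, progressively refining the regulariser rather than applying it once.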

Citation (APA)

Li, P., Fu, Y., & Gong, S. (2021). Regularising Knowledge Transfer by Meta Functional Learning. In IJCAI International Joint Conference on Artificial Intelligence (pp. 2687–2693). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/370
