AMAL: Meta Knowledge-Driven Few-Shot Adapter Learning

Citations: 1 · Mendeley readers: 14

Abstract

NLP has advanced rapidly alongside the proliferation of Transformer-based pre-trained language models. To adapt to a downstream task, a pre-trained language model must be fine-tuned with a sufficient supply of annotated examples. In recent years, Adapter-based fine-tuning methods have expanded the applicability of pre-trained language models by substantially lowering the number of annotated examples required. However, existing Adapter-based methods still fail to yield meaningful results in the few-shot regime, where only a handful of annotated examples are available. In this study, we present a meta-learning-driven low-rank adapter pooling method, called AMAL, for leveraging pre-trained language models even with just a few data points. We evaluate our method on five text classification benchmark datasets. The results show that AMAL significantly outperforms previous few-shot learning methods and achieves a new state-of-the-art.
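The abstract does not spell out AMAL's architecture, but it builds on the general low-rank (bottleneck) adapter idea: small trainable modules are inserted into a frozen pre-trained backbone, so only a tiny fraction of parameters is updated per task. The sketch below illustrates that general idea only; the class name, bottleneck rank `r`, and placement after each Transformer sublayer are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual add.

    Illustrative sketch of the generic adapter pattern; NOT the AMAL
    implementation. `r` (bottleneck rank) is a hypothetical choice.
    """

    def __init__(self, hidden_dim: int, r: int = 8):
        super().__init__()
        self.down = nn.Linear(hidden_dim, r)   # W_down: d -> r
        self.up = nn.Linear(r, hidden_dim)     # W_up:   r -> d
        self.act = nn.GELU()
        # Start the up-projection at zero so the adapter is initially an
        # identity mapping and does not perturb the frozen backbone.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the backbone representation intact.
        return h + self.up(self.act(self.down(h)))
```

In adapter-based fine-tuning, the backbone weights stay frozen and only the adapter parameters (roughly 2·d·r per insertion point here) are trained, which is what keeps the annotation requirement low; AMAL's contribution, per the abstract, is to drive such adapters with meta-learned knowledge via pooling in the few-shot regime.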

Cite (APA)

Hong, S. K., & Jang, T. Y. (2022). AMAL: Meta Knowledge-Driven Few-Shot Adapter Learning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 10381–10389). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.709
