ADEPT: Adapter-based Efficient Prompt Tuning Approach for Language Models

Abstract

Fine-tuning large pre-trained models for downstream tasks can be prohibitively expensive. Researchers have proposed alternatives, such as adapter- and prompt-based methods, for tuning these large language models with minimal parameters. However, prompt tuning has so far been ineffective for smaller language models, and little work has been done to push soft prompting forward for these models. To improve training efficiency and reduce the number of tuned parameters, we propose a novel Adapter-based Efficient Prompt Tuning approach (ADEPT). In this paper, we show that tuning the parameters of soft prompts with adapter modules, while keeping the rest of the model frozen, is a promising way to optimize smaller language models for downstream tasks. Our method achieves up to 98% of full fine-tuning performance while using only 0.02% of the total model parameters.
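The abstract's core idea — refining trainable soft-prompt embeddings through a small adapter module while the backbone language model stays frozen — can be sketched as follows. This is a hedged illustration, not the authors' implementation: the dimensions, initialization, and the bottleneck-adapter form (down-projection, ReLU, up-projection with a residual connection) are illustrative assumptions.

```python
import numpy as np

# Hedged sketch (not ADEPT's actual code): soft prompts whose embeddings are
# refined by a small bottleneck adapter; the backbone LM's weights are frozen.
rng = np.random.default_rng(0)

d_model, bottleneck = 768, 16      # hidden size, adapter bottleneck (assumed)
n_prompt, n_tokens = 20, 10        # soft-prompt length, input length (assumed)

# Stand-in for a frozen pre-trained embedding table (BERT-sized vocabulary).
vocab_embed = rng.standard_normal((30522, d_model)) * 0.02  # frozen

# The only trainable parameters: the soft prompt and the adapter weights.
soft_prompt = rng.standard_normal((n_prompt, d_model)) * 0.02
w_down = rng.standard_normal((d_model, bottleneck)) * 0.02
w_up = rng.standard_normal((bottleneck, d_model)) * 0.02

def adapter(x):
    """Bottleneck adapter: down-project, ReLU, up-project, residual add."""
    return x + np.maximum(x @ w_down, 0.0) @ w_up

def build_input(token_ids):
    """Prepend adapter-refined soft prompts to frozen token embeddings."""
    prompts = adapter(soft_prompt)    # (n_prompt, d_model), gradients flow here
    tokens = vocab_embed[token_ids]   # frozen lookup, no gradients
    return np.concatenate([prompts, tokens], axis=0)

seq = build_input(rng.integers(0, 30522, size=n_tokens))
trainable = soft_prompt.size + w_down.size + w_up.size
total = trainable + vocab_embed.size
print(seq.shape)
print(f"trainable fraction: {trainable / total:.4%}")
```

Even in this toy setting, the trainable parameters are a small fraction of the total, which is the efficiency argument the paper makes (its reported figure of 0.02% refers to a full model, not this sketch).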

Citation (APA)
Shah, A., Thapa, S., Jain, A., & Huang, L. (2023). ADEPT: Adapter-based Efficient Prompt Tuning Approach for Language Models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 121–128). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.sustainlp-1.8
