Late Prompt Tuning: A Late Prompt Could Be Better Than Many Prompts


Abstract

Prompt tuning is a parameter-efficient tuning (PETuning) method for utilizing pre-trained models (PTMs): it simply prepends a soft prompt to the input and optimizes only the prompt to adapt PTMs to downstream tasks. Although it is parameter- and deployment-efficient, its performance still lags behind other state-of-the-art PETuning methods. Moreover, the training cost of prompt tuning is not significantly reduced, since back-propagation still passes through the entire model. Through empirical analyses, we shed light on the lagging performance of prompt tuning and identify a trade-off between the propagation distance from the label signals to the inserted prompt and the influence of the prompt on the model outputs. We then present Late Prompt Tuning (LPT), which inserts a late prompt into an intermediate layer of the PTM instead of the input layer or all layers. The late prompt is produced by a neural prompt generator conditioned on the hidden states before the prompt insertion layer and is therefore instance-dependent. Extensive experimental results across various tasks and PTMs show that LPT achieves performance competitive with full model tuning and other PETuning methods under both full-data and few-shot scenarios, while offering faster training and lower memory cost.
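To make the idea concrete, below is a minimal sketch of the late-prompt mechanism in PyTorch. It assumes access to the hidden states at the chosen insertion layer of a frozen PTM; the bottleneck-MLP generator, its dimensions, and all names here are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class LatePromptGenerator(nn.Module):
    """Generates an instance-dependent soft prompt from intermediate hidden states."""
    def __init__(self, hidden_size: int, prompt_len: int, bottleneck: int = 64):
        super().__init__()
        self.prompt_len = prompt_len
        self.net = nn.Sequential(
            nn.Linear(hidden_size, bottleneck),
            nn.Tanh(),
            nn.Linear(bottleneck, prompt_len * hidden_size),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden) from the layer just before insertion
        pooled = hidden_states.mean(dim=1)              # (batch, hidden)
        prompt = self.net(pooled)                       # (batch, prompt_len * hidden)
        return prompt.view(-1, self.prompt_len, hidden_states.size(-1))

def insert_late_prompt(hidden_states: torch.Tensor,
                       generator: LatePromptGenerator) -> torch.Tensor:
    """Prepend the generated prompt to the hidden states at the insertion layer."""
    prompt = generator(hidden_states)                   # (batch, prompt_len, hidden)
    return torch.cat([prompt, hidden_states], dim=1)    # extended sequence

# Usage sketch: with the PTM frozen, only the generator (and a task head) receive
# gradients, and back-propagation stops at the insertion layer rather than the input.
hidden = torch.randn(2, 16, 768)                        # dummy intermediate states
gen = LatePromptGenerator(hidden_size=768, prompt_len=10)
extended = insert_late_prompt(hidden, gen)
print(extended.shape)                                   # torch.Size([2, 26, 768])
```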

Cite (APA)

Liu, X., Sun, T., Huang, X., & Qiu, X. (2022). Late Prompt Tuning: A Late Prompt Could Be Better Than Many Prompts. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 1325–1338). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.277
