Dual Context-Guided Continuous Prompt Tuning for Few-Shot Learning

Abstract

The prompt-based paradigm has shown competitive performance on many NLP tasks. However, its success depends heavily on prompt design, and its effectiveness varies with the model and the training data. In this paper, we propose a novel dual context-guided continuous prompt (DCCP) tuning method. To exploit the rich contextual information in language structure and to close the gap between discrete and continuous prompt tuning, DCCP introduces two auxiliary training objectives and constructs its input in a pair-wise fashion. Experimental results demonstrate that our method is applicable to many NLP tasks and often outperforms existing prompt tuning methods by a large margin in the few-shot setting.
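The abstract does not detail DCCP's dual context objectives or its pair-wise input construction, so those cannot be reproduced here. As a rough illustration of the underlying mechanism the paper builds on, the sketch below shows generic continuous (soft) prompt tuning with a frozen HuggingFace-style backbone: a small matrix of learnable prompt embeddings is prepended to the input embeddings, and only those prompt vectors are trained. All names and the hyperparameter `n_prompt_tokens` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SoftPromptModel(nn.Module):
    """Minimal sketch of continuous (soft) prompt tuning.

    Assumes a HuggingFace-style backbone exposing `config.hidden_size`,
    `get_input_embeddings()`, and an `inputs_embeds` forward argument.
    This is NOT the DCCP method itself, only the shared soft-prompt idea.
    """

    def __init__(self, backbone, n_prompt_tokens=20):
        super().__init__()
        self.backbone = backbone
        # Freeze the pretrained model; only the prompt vectors are tuned.
        for p in self.backbone.parameters():
            p.requires_grad = False
        hidden = backbone.config.hidden_size
        # Trainable continuous prompt embeddings (the only parameters updated).
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        # Look up the frozen token embeddings: (batch, seq_len, hidden).
        embeds = self.backbone.get_input_embeddings()(input_ids)
        batch = embeds.size(0)
        # Prepend the same learned prompt to every example in the batch.
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, embeds], dim=1)
        # Extend the attention mask to cover the prompt positions.
        prompt_mask = torch.ones(
            batch, self.prompt.size(0),
            device=attention_mask.device, dtype=attention_mask.dtype,
        )
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.backbone(inputs_embeds=inputs_embeds,
                             attention_mask=attention_mask)
```

Because only `self.prompt` receives gradients, the number of tuned parameters is tiny, which is what makes this family of methods attractive in few-shot settings; DCCP adds its two auxiliary objectives and pair-wise inputs on top of this basic mechanism.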

Cite

CITATION STYLE: APA

Zhou, J., Tian, L., Yu, H., Zhou, X., Su, H., & Zhou, J. (2022). Dual Context-Guided Continuous Prompt Tuning for Few-Shot Learning. In Findings of the Association for Computational Linguistics: ACL 2022 (pp. 79–84). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-acl.8
