Abstract
Recent advances in prompt tuning have proven effective as a new language modeling paradigm for various natural language understanding tasks. However, it is challenging to adapt soft prompt embeddings to different domains or to generalize to low-data settings, because learning soft prompts is itself unstable, task-specific, and bias-prone. This paper proposes a principled learning framework, soft prompt construction (SPC), to facilitate learning domain-adaptable soft prompts. Derived from the SPC framework is a simple loss that can plug into various models and tuning approaches to improve their cross-domain performance. We show SPC can improve upon the state of the art for contextual query rewriting, summarization, and paraphrase detection by up to 5%, 19%, and 16%, respectively.
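The abstract does not spell out the SPC loss itself, so the following is only a minimal sketch of the general setup it describes: trainable soft prompt embeddings prepended to a frozen backbone, with a plug-in auxiliary loss added to the task loss. The names (SoftPromptModel, spc_style_loss) and the basis-mixture parameterization and regularizer are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Prepends trainable soft prompt embeddings to the input embeddings
    of a frozen backbone LM. The backbone can be any module that accepts
    an `inputs_embeds` tensor (e.g., a Hugging Face transformer)."""

    def __init__(self, backbone, embed_dim, prompt_len=20, n_basis=4):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # only the prompt parameters are tuned
        # Assumption: the prompt is "constructed" as a mixture of basis
        # prompts; the paper's actual construction is not given here.
        self.basis = nn.Parameter(torch.randn(n_basis, prompt_len, embed_dim) * 0.02)
        self.mix = nn.Parameter(torch.zeros(n_basis))

    def prompt(self):
        # Build the task prompt as a convex combination of basis prompts.
        w = torch.softmax(self.mix, dim=0)
        return torch.einsum("k,kld->ld", w, self.basis)

    def forward(self, inputs_embeds):
        b = inputs_embeds.size(0)
        p = self.prompt().unsqueeze(0).expand(b, -1, -1)
        # Prepend the soft prompt along the sequence dimension.
        return self.backbone(inputs_embeds=torch.cat([p, inputs_embeds], dim=1))

def spc_style_loss(task_loss, model, lam=0.1):
    """Placeholder for a plug-in auxiliary loss: penalize drift of the
    constructed prompt from the basis average. This stands in for the
    (unspecified) SPC loss purely to show how such a term attaches to
    an existing training objective."""
    anchor = model.basis.mean(dim=0)
    reg = (model.prompt() - anchor).pow(2).mean()
    return task_loss + lam * reg
```

The key property the abstract emphasizes is that the extra term is additive, so it can be dropped into any existing model or tuning recipe by wrapping the task loss, as spc_style_loss does above.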
Citation
Zhao, W., Gupta, A., Chung, T., & Huang, J. (2023). SPC: Soft Prompt Construction for Cross Domain Generalization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 118–130). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.repl4nlp-1.10