Abstract
Prompt-tuning methods have emerged as lightweight alternatives to fine-tuning, addressing its inefficient parameter updates and storage costs in Natural Language Generation (NLG) tasks. Moreover, efforts to narrow the gap between pre-training and fine-tuning have proven successful in low-resource settings. As large Pre-trained Language Models (PLMs) for Program and Language Generation (PLG) tasks continue to be developed, prompt-tuning methods are needed for these tasks as well. However, because the gap between pre-training and fine-tuning in these models differs from that of PLMs for natural language, a prompt-tuning method that reflects the traits of PLMs for programming languages is required. In this paper, we propose CodePrompt, a task-agnostic prompt-tuning method for PLG tasks that combines an Input-Dependent Prompt Template (to bridge the gap between pre-training and fine-tuning of PLMs for program and language) with Corpus-Specific Prefix Tuning (to update the parameters of such PLMs efficiently). We also propose a method to provide richer prefix word information within a limited prefix length. We demonstrate that our method is effective on three PLG tasks, not only in the full-data setting but also in low-resource and cross-domain settings.
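As background for the Corpus-Specific Prefix Tuning component, the sketch below illustrates plain prefix tuning (in the style of Li & Liang, 2021), the family of methods the abstract refers to: the pre-trained attention weights stay frozen, and only a short sequence of trainable key/value "virtual tokens" prepended to the attention layer is updated. This is a minimal single-layer sketch, and all module names, dimensions, and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Minimal prefix-tuning sketch: the PLM's attention weights are frozen;
# only a small trainable prefix of key/value vectors receives gradients.
import torch
import torch.nn as nn

class PrefixSelfAttention(nn.Module):
    """Self-attention over [prefix ; tokens]; only the prefix is trainable."""
    def __init__(self, d_model=256, n_heads=4, prefix_len=10):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Trainable prefix "virtual tokens" (e.g., one set per corpus).
        self.prefix_k = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)
        self.prefix_v = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)
        # Freeze the pre-trained attention weights; only the prefix updates.
        for p in self.attn.parameters():
            p.requires_grad = False

    def forward(self, x):                       # x: (batch, seq, d_model)
        b = x.size(0)
        k = torch.cat([self.prefix_k.expand(b, -1, -1), x], dim=1)
        v = torch.cat([self.prefix_v.expand(b, -1, -1), x], dim=1)
        out, _ = self.attn(query=x, key=k, value=v)
        return out

x = torch.randn(2, 16, 256)                     # dummy batch of embeddings
layer = PrefixSelfAttention()
print(layer(x).shape)                           # torch.Size([2, 16, 256])
print([n for n, p in layer.named_parameters() if p.requires_grad])
# ['prefix_k', 'prefix_v']  -> only the prefix is optimized
```

Because only the prefix parameters receive gradients, a separate prefix can be stored per task or corpus at a small fraction of the cost of a full fine-tuned checkpoint, which is the parameter- and storage-efficiency argument the abstract makes.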
Citation
Choi, Y. S., & Lee, J. H. (2023). CodePrompt: Task-Agnostic Prefix Tuning for Program and Language Generation. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 5282–5297). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.325