CONTROL PREFIXES for Parameter-Efficient Text Generation

Abstract

Prefix-tuning is a parameter-efficient and powerful technique for adapting a pre-trained language model to a downstream application. However, it learns a single, dataset-level set of tuned parameters that is shared by every example. We extend the framework with a dynamic method, CONTROL PREFIXES, which allows for the effective inclusion of input-dependent information, thereby demonstrating how prefix-tuning can be used for controlled text generation tasks. The method incorporates attribute-level learnable representations into different layers of a pre-trained Transformer, enabling the generated text to be guided in a particular direction. We provide a systematic evaluation of the technique and apply it to five datasets from the GEM benchmark for natural language generation (NLG). Using only 0.1-2% additional trainable parameters, we show CONTROL PREFIXES can even outperform full fine-tuning methods, and present state-of-the-art results on several data-to-text datasets, including WebNLG. We also examine the common case where input-dependent information is unavailable at test time and show CONTROL PREFIXES can excel in this setting as well.
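The core idea of combining a shared, dataset-level prefix with attribute-level prefixes that are injected as extra key/value states at each Transformer layer can be sketched as follows. This is a hypothetical minimal implementation for illustration only, not the authors' code; all module and parameter names (`ControlPrefixes`, `prefix_len`, `ctrl_len`, `n_attributes`) are assumptions.

```python
import torch
import torch.nn as nn

class ControlPrefixes(nn.Module):
    """Minimal sketch (hypothetical, not the authors' implementation):
    a dataset-level prefix, as in standard prefix-tuning, is combined with
    an attribute-level prefix selected per input, and the result is exposed
    as extra key/value states for every Transformer layer."""

    def __init__(self, n_layers, n_heads, head_dim,
                 prefix_len, ctrl_len, n_attributes):
        super().__init__()
        kv_dim = 2 * n_heads * head_dim  # keys and values stored together
        # Dataset-level prefix shared by all examples (plain prefix-tuning).
        self.general = nn.Parameter(
            torch.randn(n_layers, prefix_len, kv_dim) * 0.02)
        # One learnable prefix per attribute label (e.g. a domain/category).
        self.control = nn.Parameter(
            torch.randn(n_attributes, n_layers, ctrl_len, kv_dim) * 0.02)
        self.n_heads, self.head_dim = n_heads, head_dim

    def forward(self, attribute_ids):
        """attribute_ids: LongTensor [batch], one attribute per input.
        Returns a list of per-layer (key, value) tensors, each shaped
        [batch, n_heads, prefix_len + ctrl_len, head_dim]."""
        batch = attribute_ids.shape[0]
        gen = self.general.unsqueeze(0).expand(batch, -1, -1, -1)
        ctrl = self.control[attribute_ids]        # [batch, layers, ctrl_len, kv]
        full = torch.cat([gen, ctrl], dim=2)      # concat along prefix axis
        layers = []
        for layer_idx in range(full.shape[1]):
            kv = full[:, layer_idx]               # [batch, L, 2*H*D]
            seq_len = kv.shape[1]
            kv = kv.view(batch, seq_len, 2, self.n_heads, self.head_dim)
            k, v = kv.unbind(dim=2)               # each [batch, L, H, D]
            layers.append((k.transpose(1, 2), v.transpose(1, 2)))
        return layers

# Example: 2 layers, 4 heads of dim 16, a 5-token general prefix,
# a 3-token control prefix, and 7 possible attribute labels.
prefixes = ControlPrefixes(2, 4, 16, prefix_len=5, ctrl_len=3, n_attributes=7)
past = prefixes(torch.tensor([0, 6]))
```

Only these prefix parameters would be trained; the pre-trained model itself stays frozen, which is where the 0.1-2% trainable-parameter figure in the abstract comes from.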

Citation (APA)

Clive, J., Cao, K., & Rei, M. (2022). CONTROL PREFIXES for Parameter-Efficient Text Generation. In GEM 2022 - 2nd Workshop on Natural Language Generation, Evaluation, and Metrics, Proceedings of the Workshop (pp. 363–382). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.gem-1.31
