Controllable Natural Language Generation with Contrastive Prefixes


Abstract

To guide the generation of large pretrained language models (LMs), previous work has focused on directly fine-tuning the language model or utilizing an attribute discriminator. In this work, we propose a novel lightweight framework for controllable GPT2 (Radford et al., 2019) generation, which utilizes a set of small attribute-specific vectors, called prefixes (Li and Liang, 2021), to steer natural language generation. Unlike Li and Liang (2021), where each prefix is trained independently, we take the relationship among prefixes into consideration and train multiple prefixes simultaneously. We propose a novel supervised method and a novel unsupervised method to train the prefixes for single-aspect control, while the combination of these two methods can achieve multi-aspect control. Experimental results on both single-aspect and multi-aspect control show that our methods can guide generation toward the desired attributes while maintaining high linguistic quality.
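To make the mechanism concrete, below is a minimal sketch (PyTorch with Hugging Face transformers) of how attribute-specific prefixes can steer a frozen GPT-2, together with a toy contrastive objective that trains the prefixes jointly rather than independently. For simplicity the prefixes here are embedding-level vectors prepended to the input, a simplification of prefix-tuning as in Li and Liang (2021), which injects activations at every layer; the function names, margin value, and loss weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
for p in model.parameters():      # the LM itself stays frozen;
    p.requires_grad_(False)       # only the prefixes are trained

n_attrs, prefix_len = 2, 10       # e.g. attribute 0 = negative, 1 = positive
embed_dim = model.config.n_embd
prefixes = torch.nn.Embedding(n_attrs * prefix_len, embed_dim)  # trainable

def logits_with_prefix(attr_id: int, input_ids: torch.Tensor) -> torch.Tensor:
    """Prepend the attribute's prefix embeddings to the token embeddings."""
    rows = torch.arange(attr_id * prefix_len, (attr_id + 1) * prefix_len)
    prefix_embeds = prefixes(rows).unsqueeze(0)        # (1, prefix_len, d)
    token_embeds = model.transformer.wte(input_ids)    # (1, seq_len, d)
    inputs_embeds = torch.cat([prefix_embeds, token_embeds], dim=1)
    # Drop the logits at the prefix positions; keep only token positions.
    return model(inputs_embeds=inputs_embeds).logits[:, prefix_len:, :]

def lm_nll(attr_id: int, input_ids: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of the text under the given attribute prefix."""
    logits = logits_with_prefix(attr_id, input_ids)
    return F.cross_entropy(
        logits[:, :-1, :].reshape(-1, logits.size(-1)),
        input_ids[:, 1:].reshape(-1),
    )

def contrastive_step(input_ids: torch.Tensor, label: int) -> torch.Tensor:
    """Toy joint objective: the correct prefix should give the labeled text a
    lower NLL than every competing prefix (margin hinge), on top of the usual
    LM loss. The margin value is an assumption for illustration."""
    nlls = torch.stack([lm_nll(a, input_ids) for a in range(n_attrs)])
    lm_loss = nlls[label]
    others = torch.cat([nlls[:label], nlls[label + 1:]])
    disc_loss = F.relu(lm_loss - others + 1.0).mean()
    return lm_loss + disc_loss

opt = torch.optim.Adam(prefixes.parameters(), lr=1e-4)
batch = tokenizer("the movie was wonderful", return_tensors="pt").input_ids
loss = contrastive_step(batch, label=1)  # 1 = positive, per the toy labeling
loss.backward()
opt.step()
```

At generation time one would decode with the trained prefix for the desired attribute; per the abstract, combining the supervised and unsupervised training methods is what enables multi-aspect control.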

Cite (APA)

Qian, J., Dong, L., Shen, Y., Wei, F., & Chen, W. (2022). Controllable natural language generation with contrastive prefixes. In Findings of the Association for Computational Linguistics: ACL 2022 (pp. 2912–2924). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-acl.229
