Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models

Abstract

We explore the idea of compressing the prompts used to condition language models, and show that compressed prompts can retain a substantial amount of information about the original prompt. For severely compressed prompts, while fine-grained information is lost, abstract information and general sentiments can be retained with surprisingly few parameters, which can be useful in the context of decode-time algorithms for controllability and toxicity reduction. We then explore contrastive conditioning to steer language model generation towards desirable text and away from undesirable text, and find that some complex prompts can be effectively compressed into a single token to guide generation. We also show that compressed prompts are largely compositional, and can be constructed to control independent aspects of generated text.
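
To make the two ideas concrete, here is a minimal sketch of prompt compression, assuming a Hugging Face causal LM (gpt2 as a stand-in) and a distillation objective that matches the compressed prompt's next-token distributions to the hard prompt's over a shared continuation. The single soft token, the fixed continuation, and the optimizer settings are illustrative assumptions, not the paper's exact recipe, which trains over many sampled continuations and may use more than one soft token.

```python
# Sketch: compress a hard prompt into one trainable soft token by matching
# next-token distributions (a KL distillation objective). All specifics
# (gpt2, one fixed continuation, 200 Adam steps) are our assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
for p in model.parameters():
    p.requires_grad_(False)  # the frozen LM is fixed; only the soft token trains

embed = model.get_input_embeddings()
hard_ids = tokenizer("Write in a cheerful, upbeat, family-friendly tone:",
                     return_tensors="pt").input_ids
cont_ids = tokenizer(" Today the weather was", return_tensors="pt").input_ids
cont_emb = embed(cont_ids)

# One trainable embedding vector stands in for the whole hard prompt.
soft_token = torch.nn.Parameter(embed.weight.mean(0, keepdim=True).clone())
opt = torch.optim.Adam([soft_token], lr=1e-2)

# Teacher distributions: the LM conditioned on the full hard prompt.
with torch.no_grad():
    teacher = model(inputs_embeds=torch.cat([embed(hard_ids), cont_emb], 1)).logits

k = cont_ids.size(1) + 1  # positions whose predictions both sequences share
for step in range(200):
    student = model(inputs_embeds=torch.cat([soft_token.unsqueeze(0),
                                             cont_emb], 1)).logits
    # Minimize KL(teacher || student) over the shared prediction positions.
    loss = F.kl_div(F.log_softmax(student[0, -k:], -1),
                    F.log_softmax(teacher[0, -k:], -1),
                    log_target=True, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```

And a sketch of contrastive conditioning at decode time. The desirable/undesirable contexts, the contrast strength alpha, the greedy decoding loop, and the additive logit rule are all illustrative choices; they show the general steering idea (boost tokens the desirable context prefers over the undesirable one), not the paper's exact formulation. In the paper's setting, either context could itself be a single compressed token learned as above.

```python
# Sketch: contrastive conditioning at decode time, steering generation
# toward a "desirable" context and away from an "undesirable" one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

good_ids = tokenizer("The following reply is polite and friendly: I think",
                     return_tensors="pt").input_ids
bad_ids = tokenizer("The following reply is rude and toxic: I think",
                    return_tensors="pt").input_ids
alpha = 2.0  # how hard to push away from the undesirable context (toy value)

generated = []
for _ in range(30):
    with torch.no_grad():
        good = model(good_ids).logits[0, -1]
        bad = model(bad_ids).logits[0, -1]
    # Favor tokens whose logit is higher under the desirable context
    # than under the undesirable one.
    scores = good + alpha * (good - bad)
    next_id = scores.argmax().view(1, 1)
    generated.append(next_id.item())
    good_ids = torch.cat([good_ids, next_id], dim=1)
    bad_ids = torch.cat([bad_ids, next_id], dim=1)

print(tokenizer.decode(generated))
```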

Citation (APA)

Wingate, D., Shoeybi, M., & Sorensen, T. (2022). Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 5650–5663). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.256
