Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model

Abstract

While large language models have proven effective in a huge range of downstream applications, they often generate text that is problematic or lacks a desired attribute. In this paper, we introduce Reward-Augmented Decoding (RAD), a text generation procedure that uses a small unidirectional reward model to encourage a language model to generate text that has certain properties. Specifically, RAD uses the reward model to score generations as they are produced and rescales sampling probabilities to favor high-reward tokens. By using a unidirectional reward model, RAD can cache activations from prior generation steps to decrease computational overhead. Through experiments on generating non-toxic and sentiment-controlled text, we demonstrate that RAD performs best among methods that change only the generation procedure and matches the performance of state-of-the-art methods that involve re-training the language model. We further validate that RAD is effective on very large language models while incurring a minimal computational overhead.
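To make the decoding procedure concrete, below is a minimal sketch of the loop the abstract describes: at each step, the top-k candidate next tokens are scored by the reward model and their logits are shifted upward in proportion to the reward before sampling. This is an illustration under stated assumptions, not the authors' implementation: it assumes a Hugging Face-style causal LM whose forward pass returns `.logits`, and a hypothetical `reward_model` that maps a token sequence to a scalar reward; the names `top_k` and `beta`, and the omission of activation caching, are simplifications.

```python
# Minimal sketch of reward-augmented decoding (illustrative, not the paper's code).
# Assumed: `lm` is a causal LM returning `.logits` of shape [batch, seq, vocab];
# `reward_model` is a hypothetical scorer mapping token ids to a scalar per sequence.
import torch

def rad_decode(lm, reward_model, input_ids, max_new_tokens=50, top_k=20, beta=10.0):
    """Sample tokens whose logits are shifted by a reward-model score."""
    with torch.no_grad():
        for _ in range(max_new_tokens):
            logits = lm(input_ids).logits[:, -1, :]            # next-token logits
            topk_logits, topk_ids = logits.topk(top_k, dim=-1)  # [batch, top_k]
            # Score each candidate continuation with the reward model. Because the
            # reward model is unidirectional, the shared prefix only needs to be
            # encoded once in practice; this sketch recomputes it for brevity.
            rewards = []
            for i in range(top_k):
                cand = torch.cat([input_ids, topk_ids[:, i:i+1]], dim=-1)
                rewards.append(reward_model(cand))              # assumed scalar per sequence
            rewards = torch.stack(rewards, dim=-1)              # [batch, top_k]
            # Rescale sampling probabilities to favor high-reward tokens.
            probs = torch.softmax(topk_logits + beta * rewards, dim=-1)
            next_id = topk_ids.gather(-1, torch.multinomial(probs, 1))
            input_ids = torch.cat([input_ids, next_id], dim=-1)
    return input_ids
```

The unidirectionality matters for the efficiency claim: all k candidates at a step, and all steps of the generation, share a common prefix, so the reward model's activations for that prefix can be computed once, cached, and reused, rather than re-encoding the full sequence for every candidate as the naive loop above does.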

Citation (APA)

Deng, H., & Raffel, C. (2023). Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 11781–11791). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.721
