Gradient-Based Constrained Sampling from Language Models

19 Citations · 29 Readers (Mendeley)

Abstract

Large pretrained language models generate fluent text but are notoriously hard to sample from in a controllable way. In this work, we study constrained sampling from such language models: generating text that satisfies user-defined constraints while maintaining fluency and the model's performance on a downstream task. We propose MUCOLA, a sampling procedure that combines the log-likelihood of the language model with arbitrary (differentiable) constraints in a single energy function and then generates samples in a non-autoregressive manner. Specifically, it initializes the entire output sequence with noise and follows a Markov chain defined by Langevin dynamics, using the gradients of the energy function. We evaluate MUCOLA on text generation with soft and hard constraints, as well as their combinations, obtaining significant improvements over competitive baselines for toxicity avoidance, sentiment control, and keyword-guided generation.
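To make the sampling procedure described above concrete, the following is a minimal sketch of Langevin-dynamics sampling from an energy that combines an LM term with a differentiable constraint. It is an illustration under stated assumptions, not the paper's exact implementation: the names lm_energy and constraint_energy, the continuous-embedding parameterization, and all hyperparameters are hypothetical.

    # Hypothetical sketch: gradient-based (Langevin) sampling from a combined energy.
    import torch

    def langevin_sample(lm_energy, constraint_energy, seq_len, embed_dim,
                        steps=500, step_size=0.1, constraint_weight=1.0):
        # Initialize the entire output sequence (as continuous embeddings) with noise.
        x = torch.randn(seq_len, embed_dim, requires_grad=True)
        for _ in range(steps):
            # Energy = negative LM log-likelihood term + weighted constraint penalty,
            # both assumed to return differentiable scalars.
            energy = lm_energy(x) + constraint_weight * constraint_energy(x)
            grad, = torch.autograd.grad(energy, x)
            with torch.no_grad():
                noise = torch.randn_like(x)
                # Langevin update: gradient step on the energy plus Gaussian noise.
                x = x - step_size * grad + (2 * step_size) ** 0.5 * noise
            x.requires_grad_(True)
        # In practice each embedding would be mapped back to a vocabulary token afterwards.
        return x

The sketch operates on continuous embeddings so that gradients of both the language-model term and the constraint term can drive the chain; the noise term is what distinguishes sampling from plain constrained optimization.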

Citation (APA)
Kumar, S., Paria, B., & Tsvetkov, Y. (2022). Gradient-Based Constrained Sampling from Language Models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 2251–2277). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.144
