Controllable Generation from Pre-trained Language Models via Inverse Prompting

Abstract

Large-scale pre-trained language models have demonstrated strong capabilities for generating realistic text. However, it remains challenging to control the generation results. Previous approaches such as prompting are far from sufficient, and this lack of controllability limits the usage of language models. To tackle this challenge, we propose an innovative method, inverse prompting, to better control text generation. The core idea of inverse prompting is to use the generated text to inversely predict the prompt during beam search, which enhances the relevance between the prompt and the generated text and thus improves controllability. Empirically, we pre-train a large-scale Chinese language model and perform a systematic study using human evaluation on the tasks of open-domain poem generation and open-domain long-form question answering. Results demonstrate that our proposed method substantially outperforms the baselines and that our generation quality is close to human performance on some of the tasks.
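
To make the core idea concrete, the following is a minimal sketch of how beam candidates could be re-scored by the likelihood of recovering the original prompt from the generated text. It is not the authors' implementation: the `model.log_likelihood` method, the inverse-prompt template, and the `alpha` weight are all illustrative assumptions.

```python
# Sketch of inverse-prompting beam re-scoring (illustrative, not the paper's code).
# Assumption: `model` is an autoregressive LM exposing log_likelihood(text, condition_len),
# which returns the log-probability of the tokens after the first `condition_len` characters.

def inverse_prompt_score(model, prompt, candidate, alpha=1.0):
    """Score a candidate continuation by how well it predicts the prompt back.

    The candidate is wrapped into a hypothetical inverse-prompt template, and the
    log-likelihood of the original prompt under that template is used as the score.
    """
    inverse_prompt = f'"{candidate}" is a passage about: '  # hypothetical template
    return alpha * model.log_likelihood(inverse_prompt + prompt,
                                        condition_len=len(inverse_prompt))


def rerank_beam(model, prompt, candidates, alpha=1.0):
    """Keep the beam candidates whose generated text best 'explains' the prompt."""
    return sorted(candidates,
                  key=lambda c: inverse_prompt_score(model, prompt, c, alpha),
                  reverse=True)
```

In practice this inverse score would be combined with the ordinary forward generation likelihood at each beam-search step, so that candidates are both fluent and relevant to the prompt.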

Citation (APA)

Zou, X., Yin, D., Zhong, Q., Yang, H., Yang, Z., & Tang, J. (2021). Controllable Generation from Pre-trained Language Models via Inverse Prompting. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 2450–2460). Association for Computing Machinery. https://doi.org/10.1145/3447548.3467418
