Reducing Non-Normative Text Generation from Language Models

Abstract

Large-scale, transformer-based language models such as GPT-2 are pretrained on diverse corpora scraped from the internet. Consequently, they are prone to generating non-normative text (i.e., text that violates social norms). We introduce a technique for fine-tuning GPT-2, using a policy-gradient reinforcement learning technique and a normative text classifier to produce reward and punishment values. We evaluate our technique on five data sets using automated and human-participant experiments. The normative text classifier is 81-90% accurate when compared to gold-standard human judgements of normative and non-normative generated text. Our normative fine-tuning technique is able to reduce non-normative text by 27-61%, depending on the data set.
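
As a rough illustration of the approach the abstract describes, the sketch below fine-tunes GPT-2 with a REINFORCE-style policy gradient, using a classifier's output as the reward signal. It is a minimal sketch built on PyTorch and Hugging Face transformers, not the paper's implementation: the normative_score function is a hypothetical stand-in for the authors' classifier, and the reward shaping and hyperparameters are assumptions made for illustration.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

def normative_score(text):
    # Hypothetical stand-in for the paper's normative text classifier.
    # Assumed to return a value in [0, 1]; higher means more normative.
    raise NotImplementedError

def policy_gradient_step(prompt, max_new_tokens=40):
    # Sample a continuation from the current policy (the language model).
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    prompt_len = inputs["input_ids"].shape[1]
    with torch.no_grad():
        generated = model.generate(
            **inputs,
            do_sample=True,
            max_new_tokens=max_new_tokens,
            pad_token_id=tokenizer.eos_token_id,
        )
    continuation_ids = generated[0, prompt_len:]
    text = tokenizer.decode(continuation_ids, skip_special_tokens=True)

    # Map the classifier output to a signed reward/punishment value
    # (this +/- shaping is an assumption, not the paper's exact scheme).
    reward = 2.0 * normative_score(text) - 1.0

    # Recompute log-probabilities of the sampled continuation with gradients.
    logits = model(generated).logits[0, :-1, :]
    log_probs = torch.log_softmax(logits, dim=-1)
    positions = torch.arange(prompt_len - 1, generated.shape[1] - 1, device=device)
    token_log_probs = log_probs[positions, continuation_ids]

    # REINFORCE loss: raise the likelihood of rewarded (normative) samples
    # and lower it for punished (non-normative) ones.
    loss = -reward * token_log_probs.sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return text, reward

Centering the classifier score around zero makes normative samples increase the model's likelihood of similar text while non-normative samples decrease it, which matches the abstract's framing of reward and punishment values.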

Citation (APA)

Peng, X., Li, S., Frazier, S., & Riedl, M. (2020). Reducing Non-Normative Text Generation from Language Models. In INLG 2020 - 13th International Conference on Natural Language Generation, Proceedings (pp. 374–383). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.inlg-1.43
