Human Perception in Natural Language Generation


Abstract

We take a collection of short texts, some human-written and some automatically generated, and ask subjects, who are unaware of each text's source, whether they perceive it as human-produced. We use this data to fine-tune a GPT-2 model to push it to generate more human-like texts, and observe that the output of the fine-tuned model is indeed perceived as more human-like than that of the original model. In addition, we show that our automatic evaluation strategy correlates well with human judgements. We also run a linguistic analysis to unveil the characteristics of human- vs. machine-perceived language.

Citation (APA)

De Mattei, L., Lai, H., Dell’Orletta, F., & Nissim, M. (2021). Human Perception in Natural Language Generation. In GEM 2021 - 1st Workshop on Natural Language Generation, Evaluation, and Metrics, Proceedings (pp. 15–23). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.gem-1.2
