ChatGPT as a Commenter to the News: Can LLMs Generate Human-Like Opinions?

  • Tseng R
  • Verberne S
  • van der Putten P

Abstract

ChatGPT, GPT-3.5, and other large language models (LLMs) have drawn significant attention since their release, and their abilities have been investigated for a wide variety of tasks. In this research, we investigate to what extent GPT-3.5 can generate human-like comments on Dutch news articles. We define human likeness as ‘not distinguishable from human comments’, approximated by the difficulty of automatically classifying comments as human-written or GPT-generated. We analyze human likeness across multiple prompting techniques: zero-shot, few-shot, and context prompts, each for two generated personas. We find that our fine-tuned BERT models can easily distinguish human-written comments from GPT-3.5-generated comments, with none of the prompting methods performing noticeably better than the others. Further analysis shows that human comments consistently have higher lexical diversity than GPT-generated comments. This indicates that although generative LLMs can produce fluent text, their capability to create human-like opinionated comments is still limited.
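The abstract describes the core evaluation setup: a BERT model is fine-tuned to classify comments as human-written or GPT-3.5-generated, and human likeness is approximated by how difficult that classification is. Below is a minimal sketch of such a binary classifier, assuming the Hugging Face transformers library, the Dutch BERTje checkpoint GroNLP/bert-base-dutch-cased, and toy placeholder comments; the checkpoint, hyperparameters, and data are illustrative assumptions, not the authors' actual code or dataset.

```python
# Minimal sketch (not the authors' code): fine-tune a Dutch BERT model to
# classify comments as human-written (0) or GPT-generated (1).
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "GroNLP/bert-base-dutch-cased"  # BERTje; assumed checkpoint

class CommentDataset(Dataset):
    """Wraps tokenized comments and their human/GPT labels for the Trainer."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy placeholder comments; the real experiments use human and GPT-3.5
# comments collected for Dutch news articles.
texts = [
    "Wat een onzin, dit klopt van geen kant.",
    "Dit is een interessant artikel dat belangrijke punten aanstipt.",
]
labels = [0, 1]  # 0 = human-written, 1 = GPT-generated

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=CommentDataset(texts, labels, tokenizer),
)
trainer.train()
```

Under this framing, higher classifier accuracy on held-out comments means the generated comments are easier to tell apart from human ones, i.e. less human-like.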

Citation (APA)

Tseng, R., Verberne, S., & van der Putten, P. (2023). ChatGPT as a Commenter to the News: Can LLMs Generate Human-Like Opinions? (pp. 160–174). https://doi.org/10.1007/978-3-031-47896-3_12
