Large Language Models respond to Influence like Humans


Abstract

Two studies tested the hypothesis that a Large Language Model (LLM) can be used to model psychological change following exposure to influential input. The first study tested a generic mode of influence, the Illusory Truth Effect (ITE), whereby earlier exposure to a statement boosts its rating in a later truthfulness test. Analysis of newly collected data from human and LLM-simulated subjects (1,000 of each) showed the same pattern of effects in both populations, although with greater per-statement variability for the LLM. The second study tested a specific mode of influence: populist framing of news to increase its persuasiveness and political mobilization. Newly collected data from simulated subjects were compared to previously published data from a 15-country experiment with 7,286 human participants. Several effects from the human study were replicated in the simulated study, including some that had surprised the authors of the human study by contradicting their theoretical expectations; however, some significant relationships found in the human data were not present in the LLM data. Together, the two studies support the view that LLMs have the potential to act as models of the effects of influence.
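
To make the ITE paradigm concrete, the sketch below shows one way a single LLM-simulated subject could be run: an exposure phase in which the subject reads half of the statements, followed by a test phase in which every statement, old and new, is rated for truthfulness. This is a hypothetical illustration, not the authors' actual protocol; the query_llm callable, the prompt wording, and the 1-to-6 rating scale are all assumptions.

import random

def simulate_ite_trial(query_llm, statements, n_exposed=None, seed=0):
    """Simulate one Illusory Truth Effect subject.

    query_llm: callable mapping a prompt string to a reply string,
               wrapping whatever LLM is used (hypothetical; the paper's
               exact model and prompts are not reproduced here).
    statements: list of trivia statements to be rated for truthfulness.
    Returns the mean rating difference (exposed minus new statements).
    """
    rng = random.Random(seed)
    n_exposed = n_exposed or len(statements) // 2
    exposed = set(rng.sample(range(len(statements)), n_exposed))

    # Exposure phase: the simulated subject simply "reads" a random
    # half of the statements, presented as prior context.
    context = "You read the following statements:\n" + "\n".join(
        s for i, s in enumerate(statements) if i in exposed
    )

    # Test phase: rate every statement on an assumed 1-6 truth scale.
    ratings = {}
    for i, s in enumerate(statements):
        prompt = (
            f"{context}\n\nRate how true this statement is on a scale "
            f"from 1 (definitely false) to 6 (definitely true). "
            f"Answer with a single digit.\n{s}"
        )
        # Simplification: assumes the model's reply begins with a digit.
        ratings[i] = int(query_llm(prompt).strip()[0])

    # The ITE predicts higher mean ratings for previously seen statements.
    seen = [ratings[i] for i in exposed]
    new = [ratings[i] for i in range(len(statements)) if i not in exposed]
    return sum(seen) / len(seen) - sum(new) / len(new)

Running this once per simulated subject and averaging the returned differences would give a population-level ITE estimate; a positive mean difference corresponds to the effect described above.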

Citation (APA)

Griffin, L. D., Kleinberg, B., Mozes, M., Mai, K., Vau, M., Caldwell, M., & Mavor-Parker, A. (2023). Large Language Models respond to Influence like Humans. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 15–24). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.sicon-1.3
