LLMs Among Us: Generative AI Participating in Digital Discourse


Abstract

The emergence of Large Language Models (LLMs) has great potential to reshape the landscape of many social media platforms. While this brings promising opportunities, it also raises serious concerns, such as bias and privacy risks, and may enable malicious actors to spread propaganda. We developed the "LLMs Among Us" experimental framework on top of the Mastodon social media platform, in which bot and human participants communicate without knowing the ratio or nature of the other participants. We built 10 personas with three different LLMs: GPT-4, Llama 2 Chat, and Claude. We conducted three rounds of the experiment and surveyed participants after each round to measure the ability of LLMs to pose as human participants without being detected. We found that participants correctly identified the nature of other users only 42% of the time, despite knowing that both bots and humans were present. We also found that the choice of persona had substantially more impact on human perception than the choice of mainstream LLM.

Citation (APA)

Radivojevic, K., Clark, N., & Brenner, P. (2024). LLMs Among Us: Generative AI Participating in Digital Discourse. In AAAI Spring Symposium - Technical Report (Vol. 3, pp. 209–218). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaaiss.v3i1.31202
