Abstract
Although conspiracy beliefs are often viewed as resistant to correction, recent evidence shows that personalized, fact-based dialogues with a large language model (LLM) can reduce them. Is this effect driven by the debunking facts and evidence themselves, or does it rely on the messenger being an AI? In other words, would the same message be equally effective if delivered by a human? To answer this question, we conducted a preregistered experiment (N = 955) in which participants reported either a conspiracy belief or a nonconspiratorial but epistemically unwarranted belief and interacted with an LLM that argued against that belief using facts and evidence. We randomized whether the debunking LLM was characterized as an AI tool or a human expert and whether the model used a human-like conversational tone. The conversations significantly reduced participants' confidence in both conspiracies and epistemically unwarranted beliefs, with no significant differences across conditions. Thus, AI persuasion does not rely on the messenger being an AI model: it succeeds by generating compelling messages.
Citation
Boissin, E., Costello, T. H., Spinoza-Martín, D., Rand, D. G., & Pennycook, G. (2025). Dialogues with large language models reduce conspiracy beliefs even when the AI is perceived as human. PNAS Nexus, 4(11). https://doi.org/10.1093/pnasnexus/pgaf325