GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation


Abstract

Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar easily locates and lists these questionable papers alongside reputable, quality-controlled research. Our analysis of a selection of questionable GPT-fabricated scientific papers found in Google Scholar shows that many are about applied, often controversial topics susceptible to disinformation: the environment, health, and computing. The resulting enhanced potential for malicious manipulation of society's evidence base, particularly in politically divisive domains, is a growing concern.

Citation (APA)

Haider, J., Söderström, K. R., Ekström, B., & Rödl, M. (2024). GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation. Harvard Kennedy School Misinformation Review, 5(5), 1–16. https://doi.org/10.37016/mr-2020-156
