Attributions toward artificial agents in a modified Moral Turing Test

Citations of this article: 6
Mendeley readers: 48

This article is free to access.

Abstract

Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by the proposal of Allen et al. (Exp Theor Artif Intell 352:24–28, 2004), by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source. Remarkably, they rated the AI’s moral reasoning as superior in quality to humans’ along almost all dimensions, including virtuousness, intelligence, and trustworthiness, consistent with passing what Allen and colleagues call the comparative MTT. Next, when tasked with identifying the source of each evaluation (human or computer), people performed significantly above chance levels. Although the AI did not pass this test, this was not because of its inferior moral reasoning but, potentially, its perceived superiority, among other possible explanations. The emergence of language models capable of producing moral responses perceived as superior in quality to humans’ raises concerns that people may uncritically accept potentially harmful moral guidance from AI. This possibility highlights the need for safeguards around generative language models in matters of morality.


Cited by

Psychomatics-A Multidisciplinary Framework for Understanding Artificial Minds (1 citation)

Augmenting intensive care unit nursing practice with generative AI: A formative study of diagnostic synergies using simulation-based clinical cases (1 citation)

AI language model rivals expert ethicist in perceived moral expertise (0 citations)


Citation (APA)

Aharoni, E., Fernandes, S., Brady, D. J., Alexander, C., Criner, M., Queen, K., … Crespo, V. (2024). Attributions toward artificial agents in a modified Moral Turing Test. Scientific Reports, 14(1). https://doi.org/10.1038/s41598-024-58087-7

Readers over time: chart of Mendeley reader counts, 2024–2025.

Readers' Seniority

PhD / Post grad / Masters / Doc: 8 (40%)
Researcher: 7 (35%)
Lecturer / Post doc: 3 (15%)
Professor / Associate Prof.: 2 (10%)

Readers' Discipline

Psychology: 10 (50%)
Social Sciences: 5 (25%)
Computer Science: 3 (15%)
Engineering: 2 (10%)

Article Metrics

Blog mentions: 4
News mentions: 14
Social media shares, likes & comments: 11
