Artificial intelligence has rapidly integrated into human society, and its moral decision-making has begun to seep into our lives. The significance of research on moral judgments of artificial intelligence behavior is therefore becoming increasingly prominent. The present research examines how people make moral judgments about the behavior of artificial intelligence agents in a trolley dilemma, where people are usually driven by controlled cognitive processes, and in a footbridge dilemma, where people are usually driven by automatic emotional responses. Across three experiments (n = 626), we found that in the trolley dilemma (Experiment 1), the agent type rather than the actual action influenced people’s moral judgments: participants rated AI agents’ behavior as more immoral and more deserving of blame than humans’ behavior. Conversely, in the footbridge dilemma (Experiment 2), the actual action rather than the agent type influenced people’s moral judgments: participants rated action (a utilitarian act) as less moral and permissible, and as more morally wrong and blameworthy, than inaction (a deontological act). A mixed-design experiment (Experiment 3) produced a pattern of results consistent with Experiments 1 and 2. These findings suggest that people adopt different modes of moral judgment toward artificial intelligence in different types of moral dilemmas, which may be explained by the engagement of different processing systems when people make moral judgments in different types of dilemmas.
Zhang, Y., Wu, J., Yu, F., & Xu, L. (2023). Moral judgments of human vs. AI agents in moral dilemmas. Behavioral Sciences, 13(2), 181. https://doi.org/10.3390/bs13020181