The Conflict Between People’s Urge to Punish AI and Legal Systems


Abstract

Regulating artificial intelligence (AI) has become necessary in light of its deployment in high-risk scenarios. This paper explores the proposal to extend legal personhood to AI and robots, which has not yet been examined from the perspective of the general public. We present two studies (N = 3,559) that capture people's views of electronic legal personhood vis-à-vis existing liability models. Our results reveal people's desire to punish automated agents even though these agents are not recognized as having any mental state. Furthermore, participants did not believe that punishing automated agents would achieve deterrence or retribution, and they were unwilling to grant these agents the preconditions of legal punishment, namely physical independence and assets. Collectively, these findings suggest a conflict between the desire to punish automated agents and the perceived impracticability of doing so. We conclude by discussing how future design and legal decisions may shape how the public reacts to automated agents' wrongdoings.

Citation (APA)

Lima, G., Cha, M., Jeon, C., & Park, K. S. (2021). The Conflict Between People’s Urge to Punish AI and Legal Systems. Frontiers in Robotics and AI, 8. https://doi.org/10.3389/frobt.2021.756242
