Will intelligent machines become moral patients?

Abstract

This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then argue that intelligent machines are no different from traditional artifacts in this respect. To make this argument, I examine the feature of AIs that enables them to improve their intelligence, i.e., machine learning. I argue that there is no reason to believe that future advances in machine learning will take AIs closer to having a good of their own. I conclude that concerns about the moral status of future AIs are unwarranted: nothing about the nature of intelligent machines makes them better candidates for acquiring moral patiency than the traditional artifacts whose moral status does not concern us.

Citation (APA)
Moosavi, P. (2024). Will intelligent machines become moral patients? Philosophy and Phenomenological Research, 109(1), 95–116. https://doi.org/10.1111/phpr.13019
