Artificial Moral Patients: Mentality, Intentionality, and Systematicity

  • Nye, H.
  • Yoldas, T.

Abstract

In this paper, we defend three claims about what it will take for an AI system to be a basic moral patient, to whom we can owe duties of non-maleficence (not to harm it) and duties of beneficence (to benefit it): (1) moral patients are mental patients; (2) mental patients are true intentional systems; and (3) true intentional systems are systematically flexible. We suggest that we should be particularly alert to the possibility of such systematically flexible true intentional systems developing in the areas of exploratory robots and artificial personal assistants. Finally, we argue that in light of our failure to respect the well-being of existing biological moral patients, and worries about our limited resources, there are compelling moral reasons to treat artificial moral patiency as something to be avoided, at least for now.

Citation (APA)

Nye, H., & Yoldas, T. (2021). Artificial Moral Patients: Mentality, Intentionality, and Systematicity. The International Review of Information Ethics, 29. https://doi.org/10.29173/irie418
