Moral consideration for AI systems by 2030

  • Sebo, J.
  • Long, R.

Abstract

This paper makes a simple case for extending moral consideration to some AI systems by 2030. It involves a normative premise and a descriptive premise. The normative premise is that humans have a duty to extend moral consideration to beings that have a non-negligible chance, given the evidence, of being conscious. The descriptive premise is that some AI systems do in fact have a non-negligible chance, given the evidence, of being conscious by 2030. The upshot is that humans have a duty to extend moral consideration to some AI systems by 2030. And if we have a duty to do that, then we plausibly also have a duty to start preparing now, so that we can be ready to treat AI systems with respect and compassion when the time comes.

Citation (APA):

Sebo, J., & Long, R. (2023). Moral consideration for AI systems by 2030. AI and Ethics. https://doi.org/10.1007/s43681-023-00379-1
