This paper makes a simple case for extending moral consideration to some AI systems by 2030. It involves a normative premise and a descriptive premise. The normative premise is that humans have a duty to extend moral consideration to beings that have a non-negligible chance, given the evidence, of being conscious. The descriptive premise is that some AI systems do in fact have a non-negligible chance, given the evidence, of being conscious by 2030. The upshot is that humans have a duty to extend moral consideration to some AI systems by 2030. And if we have a duty to do that, then we plausibly also have a duty to start preparing now, so that we can be ready to treat AI systems with respect and compassion when the time comes.
Citation:
Sebo, J., & Long, R. (2023). Moral consideration for AI systems by 2030. AI and Ethics. https://doi.org/10.1007/s43681-023-00379-1