Is AI a Problem for Forward Looking Moral Responsibility? The Problem Followed by a Solution

Abstract

Recent work in AI ethics has come to bear on questions of responsibility, specifically on whether the nature of AI-based systems renders various notions of responsibility inappropriate. While substantial attention has been given to backward-looking senses of responsibility, there has been little consideration of forward-looking senses of responsibility. This paper aims to fill that gap, concerning itself with responsibility as moral obligation, a particular forward-looking sense of responsibility. Responsibility as moral obligation is predicated on the idea that agents have at least some degree of control over the kinds of systems they create and deploy. AI systems, by virtue of their ability to learn from experience once deployed, and their often experimental nature, may therefore pose a significant challenge to forward-looking responsibility. It may not be possible to alter the course of such systems once deployed, and so even if their initial programming determines their goals, the means by which they achieve those goals may lie outside the control of human operators. In such cases, we might say that there is a gap in moral obligation. However, in this paper, I argue that there are no “gaps” in responsibility as moral obligation as this question comes to bear on AI systems. I support this conclusion by focusing on the nature of risks when developing technology, and by showing that technological assessment is not only about the consequences that a specific technology might have. Technological assessment is more than merely consequentialist, and should also include a hermeneutic component, which attends to the societal meaning of the system. Therefore, while it may be true that the creators of AI systems cannot fully appreciate what the consequences of their systems might be, this does not undermine or render improper their responsibility as moral obligation.

Citation (APA)

Tollon, F. (2022). Is AI a Problem for Forward Looking Moral Responsibility? The Problem Followed by a Solution. In Communications in Computer and Information Science (Vol. 1551 CCIS, pp. 307–318). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-95070-5_20
