Responsibility assignment won’t solve the moral issues of artificial intelligence

  • Heinrichs, J.-H.

Abstract

Who is responsible for the events and consequences caused by the use of artificially intelligent tools, and is there a gap between what human agents can be responsible for and what is done using artificial intelligence? Both questions presuppose that the term ‘responsibility’ is a good tool for analysing the moral issues surrounding artificial intelligence. This article calls that presupposition into doubt and shows how reference to responsibility obscures the complexity of moral situations and moral agency, which can be analysed with a more differentiated toolset of moral terminology. It suggests that the impression of responsibility gaps arises only if we gloss over the complexity of the moral situation in which artificially intelligent tools are employed and if, counterfactually, we ascribe to them some kind of pseudo-agential status.

Citation (APA)

Heinrichs, J.-H. (2022). Responsibility assignment won’t solve the moral issues of artificial intelligence. AI and Ethics, 2(4), 727–736. https://doi.org/10.1007/s43681-022-00133-z
