Collective Responsibility and Artificial Intelligence

Citations: 5
Readers (Mendeley): 18

This article is free to access.

Abstract

The use of artificial intelligence (AI) to make high-stakes decisions is sometimes thought to create a troubling responsibility gap – that is, a situation where nobody can be held morally responsible for the outcomes that are brought about. However, philosophers and practitioners have recently claimed that, even though no individual can be held morally responsible, groups of individuals might be. Consequently, they think, we have less to fear from the use of AI than might appear to be the case. This paper assesses this claim. Drawing on existing philosophical models of collective responsibility, I consider whether changing focus from the individual to the collective level can help us identify a locus of responsibility in a greater range of cases of AI deployment. I find that appeal to collective responsibility will be of limited use in filling the responsibility gap: the models considered either do not apply to the case at hand or else the relevant sort of collective responsibility, even if present, will not be sufficient to remove the costs that are often associated with an absence of responsibility.

Citation (APA)

Taylor, I. (2024). Collective Responsibility and Artificial Intelligence. Philosophy and Technology, 37(1). https://doi.org/10.1007/s13347-024-00718-y
