The Point of Blaming AI Systems

  • Altehenger H
  • Menges L

Abstract

As Christian List has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense to extend our blaming practices to these systems. In this paper, we argue for the admittedly surprising thesis that this question should be answered in the affirmative: contrary to what one might initially think, it can make a lot of sense to blame AI systems since, as we furthermore argue, many of the important functions that are fulfilled by blaming humans can also be served by blaming AI systems. The paper concludes that this result gives us a good pro tanto reason to extend our blame practices to AI systems.

Citation (APA)
Altehenger, H., & Menges, L. (2024). The Point of Blaming AI Systems. Journal of Ethics and Social Philosophy, 27(2). https://doi.org/10.26556/jesp.v27i2.3060
