Algorithmic Harms and Algorithmic Wrongs

Abstract

New artificial intelligence (AI) systems grounded in machine learning are being integrated into our lives at a rapid rate, but not without consequence: scholars across domains have increasingly pointed out issues related to privacy, transparency, bias, discrimination, exploitation, and exclusion associated with algorithmic systems in both public and private sector contexts. Concerns surrounding the adverse impacts of these technologies have spurred discussion on the topic of algorithmic harm. However, the overwhelming majority of articles on such harms offer no definition of what constitutes 'harm' in these contexts. This paper aims to address this omission by introducing one criterion for a suitable account of algorithmic harm. More specifically, we follow Joel Feinberg in understanding harms as distinct from wrongs, where only the latter necessarily carry a normative dimension. This distinction highlights issues in the current scholarship surrounding the conflation of algorithmic harms and wrongs. In response to these issues, we put forth two requirements for upholding the harms/wrongs distinction when analyzing the increasingly far-reaching impacts of these technologies, and we suggest how this distinction can be useful in design, engineering, and policymaking.

Citation (APA)

Diberardino, N., Baleshta, C., & Stark, L. (2024). Algorithmic Harms and Algorithmic Wrongs. In 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT 2024 (pp. 1725–1732). Association for Computing Machinery, Inc. https://doi.org/10.1145/3630106.3659001
