Large datasets and novel statistical methods have given rise to a new wave of predictive algorithms that increasingly guide all manner of public and private decisions.1 Many (though not all) of these new-age predictive tools are generated by yet other algorithms—that is, by automated processes that scour a dataset for patterns and thereby construct a function for predicting, as best as the data permits, what will transpire in future cases.2 These predictive tools promise both vital information and refreshing objectivity: They avoid many of the recurring mistakes made by human predictors, and, of course, they hold no positive or negative attitudes toward any of the people whose fates they are asked to foretell.3 Yet a large and growing body of evidence shows that these predictive algorithms tend to predict bad outcomes—recidivism, falling behind on a loan, and more—far more often for members of socially disadvantaged groups than for others.4 And although the algorithms’ predictions may be equally accurate for members of different groups, the ways in which they err (when they do) differ: The algorithms tend more strongly toward mistaken pessimism when it comes to members of disadvantaged groups but more strongly toward mistaken optimism when it comes to members of advantaged groups.5 These disparities have fueled both technical and legal literatures about how different modifications to the predictive algorithms (or to the upstream algorithms that produce them) might achieve what many now term “algorithmic fairness.”6 This article takes up a more basic normative question that looms in the background of those debates: Why are the disparities that I have just described morally troubling at all?
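To make the last point concrete, consider a stylized numerical illustration (not drawn from the article; every count below is invented). The short Python sketch tabulates hypothetical confusion-matrix counts for a binary "bad outcome" prediction in two groups and computes each group's overall accuracy, false positive rate (mistaken pessimism), and false negative rate (mistaken optimism).

```python
def rates(true_pos, false_pos, true_neg, false_neg):
    """Return (accuracy, false_positive_rate, false_negative_rate)."""
    total = true_pos + false_pos + true_neg + false_neg
    accuracy = (true_pos + true_neg) / total
    fpr = false_pos / (false_pos + true_neg)  # predicted a bad outcome that never occurred
    fnr = false_neg / (false_neg + true_pos)  # predicted no bad outcome, but one occurred
    return accuracy, fpr, fnr

# Invented counts: a "positive" prediction means the model forecasts a bad outcome.
disadvantaged = rates(true_pos=300, false_pos=150, true_neg=500, false_neg=50)
advantaged = rates(true_pos=100, false_pos=50, true_neg=700, false_neg=150)

for name, (acc, fpr, fnr) in (("disadvantaged", disadvantaged), ("advantaged", advantaged)):
    print(f"{name}: accuracy={acc:.2f}, FPR={fpr:.2f}, FNR={fnr:.2f}")
```

On these invented figures, the two groups' overall accuracy is identical (0.80), yet the model labels 45 percent of the disadvantaged group and only 15 percent of the advantaged group as headed for a bad outcome; its false positive rate is roughly 0.23 for the former versus 0.07 for the latter, while its false negative rate runs the other way (0.14 versus 0.60). Nothing here relies on the article's own data; the example simply shows that equal accuracy is compatible with the asymmetric error patterns described above.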