Data-driven decision-making technologies used in the justice system to inform decisions about bail, parole, and prison sentencing are biased against historically marginalized groups (Angwin, Larson, Mattu, & Kirchner 2016). But these technologies' judgments, which reproduce patterns of wrongful discrimination embedded in the historical datasets on which they are trained, are well-evidenced. This presents a puzzle: how can we account for the wrong these judgments engender without also indicting morally permissible statistical inferences about persons? I motivate this puzzle and attempt an answer.
Castro, C. (2019). What's Wrong with Machine Bias. Ergo, an Open Access Journal of Philosophy, 6(15). https://doi.org/10.3998/ergo.12405314.0006.015