Experimental evaluation of algorithm-assisted human decision-making: application to pretrial public safety assessment

Abstract

Despite an increasing reliance on fully automated algorithmic decision-making in our day-to-day lives, humans still make many consequential decisions. While the existing literature focuses on the bias and fairness of algorithmic recommendations, an overlooked question is whether such recommendations improve human decisions. We develop a general statistical methodology for experimentally evaluating the causal impacts of algorithmic recommendations on human decisions. We also examine whether algorithmic recommendations improve the fairness of human decisions and derive optimal decision rules under various settings. We apply the proposed methodology to the first-ever randomized controlled trial evaluating the pretrial Public Safety Assessment (PSA) in the United States criminal justice system. Our analysis of the preliminary data shows that providing the PSA to the judge has little overall impact on the judge's decisions and subsequent arrestee behaviour.

Citation

Imai, K., Jiang, Z., Greiner, D. J., Halen, R., & Shin, S. (2023). Experimental evaluation of algorithm-assisted human decision-making: application to pretrial public safety assessment. Journal of the Royal Statistical Society. Series A: Statistics in Society, 186(2), 167–189. https://doi.org/10.1093/jrsssa/qnad010
