IDENTIFYING PREDICTION MISTAKES IN OBSERVATIONAL DATA


Abstract

Decision makers, such as doctors, judges, and managers, make consequential choices based on predictions of unknown outcomes. Do these decision makers make systematic prediction mistakes based on the available information? If so, in what ways are their predictions systematically biased? In this article, I characterize conditions under which systematic prediction mistakes can be identified in empirical settings such as hiring, medical diagnosis, and pretrial release. I derive a statistical test for whether the decision maker makes systematic prediction mistakes under these assumptions and provide methods for estimating the ways the decision maker’s predictions are systematically biased. I analyze the pretrial release decisions of judges in New York City, estimating that at least 20% of judges make systematic prediction mistakes about misconduct risk given defendant characteristics. Motivated by this analysis, I estimate the effects of replacing judges with algorithmic decision rules and find that replacing judges with algorithms where systematic prediction mistakes occur dominates the status quo. JEL codes: C10, C55, D81, D84, K40.
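The abstract refers to a statistical test for systematic prediction mistakes. As a purely illustrative aid, and not the paper's actual identification strategy (which rests on richer identifying assumptions about how cases are assigned and what the decision maker observes), the following Python sketch shows one highly simplified consistency check. It assumes, hypothetically, that misconduct risk is known for each covariate cell and that the judge maximizes expected utility with a single risk threshold; release rates should then be weakly decreasing in risk, so a "reversal" across cells would flag a prediction mistake. The cell labels, numbers, and the find_reversals helper are invented for illustration.

# Hypothetical illustration only -- not the paper's actual test or data.
# Strong assumption for illustration: the judge applies a single risk
# threshold, so release rates should be weakly decreasing in misconduct
# risk across covariate cells. A "reversal" -- a riskier cell released
# more often than a safer cell -- would then indicate a systematic
# prediction mistake.

from itertools import combinations

# Simulated covariate cells: (label, assumed misconduct risk, observed release rate).
cells = [
    ("A", 0.10, 0.90),
    ("B", 0.25, 0.60),
    ("C", 0.40, 0.70),
    ("D", 0.55, 0.20),
]

def find_reversals(cells):
    """Return pairs of cells where the riskier cell is released more often."""
    ordered = sorted(cells, key=lambda c: c[1])  # sort by risk, low to high
    reversals = []
    for (lo_name, _, lo_release), (hi_name, _, hi_release) in combinations(ordered, 2):
        if hi_release > lo_release:
            reversals.append((lo_name, hi_name))
    return reversals

print(find_reversals(cells))
# [('B', 'C')]: cell C is riskier than B yet released more often, which is
# inconsistent with a single-threshold rule under the stated assumptions.

In the actual empirical setting, misconduct is observed only for released defendants, so the paper works with bounds on risk derived under its identifying assumptions rather than the known risk values assumed in this sketch.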

Citation

Rambachan, A. (2024). Identifying Prediction Mistakes in Observational Data. Quarterly Journal of Economics, 139(3), 1665–1711. https://doi.org/10.1093/qje/qjae013
