We develop a simple structural model to illustrate how penalized regressions generate Goodhart bias when training data are clean but covariates are manipulated at known cost by future agents. With quadratic manipulation costs, bias is proportional to Ridge penalization; with extremely steep costs, it is proportional to Lasso penalization. If costs depend on absolute or percentage manipulation, the following algorithm yields manipulation-proof prediction: within the training data, evaluate candidate coefficients at their respective incentive-compatible manipulation configurations. We derive analytical coefficient adjustments: slopes shift downward if costs depend on percentage manipulation, and intercepts shift downward if costs depend on absolute manipulation. Statisticians ignoring manipulation costs select socially suboptimal penalization. Model averaging reduces these manipulation costs.
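The sketch below illustrates the core idea of the algorithm stated above in a deliberately stripped-down setting, not the paper's full model: a single slope, no intercept, quadratic manipulation cost c·m², and a Ridge penalty. Under these assumptions an agent facing slope β gains βm − cm² from shifting a covariate by m, so the incentive-compatible shift is m* = β/(2c); the manipulation-proof fit evaluates each candidate slope on training covariates shifted by its own m*. Function names such as `manipulation_proof_loss` are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def incentive_compatible_shift(beta, cost):
    # Agent payoff from shifting a covariate by m is beta*m - cost*m**2,
    # so the incentive-compatible manipulation is m* = beta / (2*cost).
    return beta / (2.0 * cost)

def manipulation_proof_loss(beta, x, y, cost, ridge_penalty):
    # Evaluate the candidate slope at its own incentive-compatible
    # manipulation configuration: training covariates are shifted by
    # m*(beta) before the penalized squared error is computed.
    m_star = incentive_compatible_shift(beta[0], cost)
    residuals = y - beta[0] * (x + m_star)
    return np.sum(residuals ** 2) + ridge_penalty * beta[0] ** 2

# Clean (unmanipulated) training data: y = 1.5 x + noise
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.5 * x + rng.normal(scale=0.5, size=200)

cost, ridge_penalty = 2.0, 5.0
naive = np.sum(x * y) / (np.sum(x ** 2) + ridge_penalty)  # standard Ridge slope
robust = minimize(manipulation_proof_loss, x0=[naive],
                  args=(x, y, cost, ridge_penalty)).x[0]  # manipulation-proof slope
print(f"naive ridge slope: {naive:.3f}, manipulation-proof slope: {robust:.3f}")
```

Because each candidate coefficient is scored against the manipulation it would itself induce, the fitted slope internalizes the agents' best response, which is the sense in which the resulting prediction is manipulation-proof.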
Hennessy, C. A., & Goodhart, C. A. E. (2023). Goodhart's law and machine learning: A structural perspective. International Economic Review, 64(3), 1075–1086. https://doi.org/10.1111/iere.12633