Target specification bias, counterfactual prediction, and algorithmic fairness in healthcare


Abstract

Bias in applications of machine learning (ML) to healthcare is usually attributed to unrepresentative or incomplete data, or to underlying health disparities. This article identifies a more pervasive source of bias that affects the clinical utility of ML-enabled prediction tools: target specification bias. Target specification bias arises when the operationalization of the target variable does not match its definition by decision makers. The mismatch is often subtle, and stems from the fact that decision makers are typically interested in predicting the outcomes of counterfactual, rather than actual, healthcare scenarios. Target specification bias persists independently of data limitations and health disparities. When left uncorrected, it gives rise to an overestimation of predictive accuracy, to inefficient utilization of medical resources, and to suboptimal decisions that can harm patients. Recent work in metrology, the science of measurement, suggests ways of counteracting target specification bias and avoiding its harmful consequences.
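The mechanism the abstract describes can be illustrated with a small simulation. The sketch below is not from the paper; the data-generating process, treatment rule, and all variable names are illustrative assumptions. It shows how a model whose target is operationalized as the factual (observed) outcome can report inflated accuracy when the decision maker's real question concerns the counterfactual outcome without treatment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 10_000

# A single patient risk factor (hypothetical).
x = rng.normal(size=(n, 1))

# Potential outcomes: y0 = outcome without treatment, y1 = with treatment.
p0 = 1 / (1 + np.exp(-x[:, 0]))                      # untreated risk rises with x
y0 = rng.binomial(1, p0)
y1 = rng.binomial(1, np.clip(p0 - 0.4, 0.05, 0.95))  # treatment lowers risk

# Clinicians treat high-risk patients, so the observed (factual) outcome
# mixes treated and untreated scenarios.
treated = x[:, 0] > 0.5
y_factual = np.where(treated, y1, y0)

# The decision maker's question is counterfactual: what would happen
# WITHOUT treatment? Operationalizing the target as the factual outcome
# mis-specifies that target.
model = LogisticRegression().fit(x, y_factual)
pred = model.predict(x)

print("accuracy vs. factual target:       ", accuracy_score(y_factual, pred))
print("accuracy vs. counterfactual target:", accuracy_score(y0, pred))
```

Because treatment is assigned to the highest-risk patients, the model's apparent accuracy against the factual target exceeds its accuracy against the counterfactual target that actually matters for the treatment decision, which is one concrete way the overestimation of predictive accuracy described above can arise.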

Citation (APA)

Tal, E. (2023). Target specification bias, counterfactual prediction, and algorithmic fairness in healthcare. In AIES 2023 - Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (pp. 312–321). Association for Computing Machinery, Inc. https://doi.org/10.1145/3600211.3604678
