Evaluation analytics for public health: Has reducing air pollution reduced death rates in the United States?


Abstract

An aim of applied science in general, and of epidemiology in particular, is to draw sound causal inferences from observations. For public health policy analysts and epidemiologists, this includes drawing inferences about whether historical changes in exposures have actually caused the consequences predicted for, or attributed to, them. The example of the Dublin coal-burning ban introduced in Chap. 1 suggests that accurate evaluation of the effect of interventions is not always easy, even when data are plentiful. Students are taught to develop hypotheses about causal relations, devise testable implications of these causal hypotheses, carry out the tests, and objectively report and learn from the results to refute or refine the initial hypotheses.

For at least the past two decades, however, epidemiologists and commentators on scientific methods and results have raised concerns that current practices too often lead to false-positive findings and to mistaken attributions of causality to mere statistical associations (Lehrer 2012; Sarewitz 2012; Ottenbacher 1998; Imberger et al. 2011). Formal training in epidemiology may be a mixed blessing in addressing these concerns. As discussed in Chap. 2, concepts such as “attributable risk,” “population attributable fraction,” “burden of disease,” “etiologic fraction,” and even “probability of causation” are solidly based on relative risks and related measures of statistical association; they do not necessarily reveal anything about predictive, manipulative, structural, or explanatory (mechanistic) causation (e.g., Cox 2013; Greenland and Brumback 2002).

Limitations of human judgment and inference, such as confirmation bias (finding what we expect to find), motivated reasoning (concluding what it pays us to conclude), and overconfidence (mistakenly believing that our own beliefs are more accurate than they really are), do not spare health effects investigators. Experts in the health effects of particular compounds are not always also experts in causal analysis, and published causal conclusions are often unwarranted, as reviewed in Chap. 2, with a pronounced bias toward finding “significant” effects where none actually exists (false positives) (Lehrer 2012; Sarewitz 2012; Ioannidis 2005; The Economist 2013).
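To make concrete why such measures are association-based, consider the standard (Levin's) formula for the population attributable fraction, which is computed purely from exposure prevalence and relative risk. The sketch below is an illustration of that textbook formula, not code from the chapter; the function name and example numbers are hypothetical:

```python
def population_attributable_fraction(prevalence, relative_risk):
    """Levin's population attributable fraction (PAF).

    prevalence: proportion of the population exposed (0 to 1).
    relative_risk: risk ratio in exposed vs. unexposed.

    PAF = p*(RR - 1) / (1 + p*(RR - 1)).
    Note: this uses only the observed association (RR); it does not,
    by itself, establish predictive or mechanistic causation.
    """
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical example: 20% of the population exposed, RR = 2.0
paf = population_attributable_fraction(0.2, 2.0)
print(round(paf, 4))  # about 0.1667: ~17% of cases "attributable" to exposure

# If RR = 1 (no association), the attributable fraction is zero
print(population_attributable_fraction(0.2, 1.0))  # 0.0
```

The point the abstract makes is that a nonzero PAF computed this way quantifies a statistical association; whether removing the exposure would actually remove that fraction of cases is a separate, causal question.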

Citation (APA)
Cox, L. A., Popken, D. A., & Sun, R. X. (2018). Evaluation analytics for public health: Has reducing air pollution reduced death rates in the United States? In International Series in Operations Research and Management Science (Vol. 270, pp. 417–442). Springer New York LLC. https://doi.org/10.1007/978-3-319-78242-3_10
