The Prevalence and Severity of Underreporting Bias in Machine- and Human-Coded Data

Abstract

Textual data are plagued by underreporting bias. For example, news sources often fail to report human rights violations. Cook et al. propose a multi-source estimator to gauge, and to account for, the underreporting of state repression events within human codings of news texts produced by the Agence France-Presse and Associated Press. We evaluate this estimator with Monte Carlo experiments, and then use it to compare the prevalence and seriousness of underreporting when comparable texts are machine coded and recorded in the World-Integrated Crisis Early Warning System dataset. We replicate Cook et al.'s investigation of human-coded state repression events with our machine-coded events, and validate both models against an external measure of human rights protections in Africa. We then use the Cook et al. estimator to gauge the seriousness and prevalence of underreporting in machine and human-coded event data on human rights violations in Colombia. We find in both applications that machine-coded data are as valid as human-coded data.

Citation (APA)

Bagozzi, B. E., Brandt, P. T., Freeman, J. R., Holmes, J. S., Kim, A., Palao Mendizabal, A., & Potz-Nielsen, C. (2019). The Prevalence and Severity of Underreporting Bias in Machine- and Human-Coded Data. In Political Science Research and Methods (Vol. 7, pp. 641–649). Cambridge University Press. https://doi.org/10.1017/psrm.2018.11
