Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics


Abstract

Why do biased algorithmic predictions arise, and what interventions can prevent them? We examine this question with a field experiment in which machine learning is used to predict human capital. We randomly assign approximately 400 AI engineers to develop software under different experimental conditions to predict standardized test scores of OECD residents. We then assess the resulting predictive algorithms using the realized test performances, and through randomized audit-like manipulations of algorithmic inputs. We also use the diversity of our subject population to measure whether demographically non-traditional engineers are more likely to notice and reduce algorithmic bias, and whether algorithmic prediction errors are correlated within programmer demographic groups. This document describes our experimental design and motivation; the full results of our experiment are available at https://ssrn.com/abstract=3615404.
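The audit-like manipulation mentioned above can be illustrated with a minimal sketch: hold all other inputs fixed, flip a single demographic input, and measure how much the algorithm's predictions shift. The synthetic data, feature layout, and linear model below are hypothetical stand-ins for illustration, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data: a binary demographic attribute, a latent skill that
# drives test scores, and an observed feature correlated with both.
n = 1_000
demographic = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)
X = np.column_stack([skill + 0.5 * demographic, demographic])
y = skill + rng.normal(scale=0.1, size=n)  # outcome depends on skill only

model = LinearRegression().fit(X, y)

# Audit-style manipulation: flip the demographic input while holding every
# other input fixed, then compare predictions on the original and
# manipulated inputs.
X_flipped = X.copy()
X_flipped[:, 1] = 1 - X_flipped[:, 1]
gap = model.predict(X) - model.predict(X_flipped)
print(f"Mean prediction shift from flipping the attribute: {gap.mean():+.3f}")
```

A nonzero mean shift indicates that the fitted algorithm's predictions depend directly on the demographic input, the kind of sensitivity an audit of this form is designed to surface.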


Citation (APA)

Cowgill, B., Dell’Acqua, F., Deng, S., Hsu, D., Verma, N., & Chaintreau, A. (2020). Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics. In EC 2020 - Proceedings of the 21st ACM Conference on Economics and Computation (pp. 679–681). Association for Computing Machinery. https://doi.org/10.1145/3391403.3399545
