Algorithms are not bias-free: Four mini-cases

Abstract

Algorithms are everywhere. As we move closer to a fully automated world, algorithms need to be investigated for possible biases and treated as imperfect systems. This case study discusses how cognitive biases infiltrate and shape algorithmic biases in everyday technologies. Mental shortcuts create biases not only in our minds but also in algorithms, through the processes of development and implementation. Two key questions are examined in this case study: (1) Where do algorithmic biases appear? (2) How do algorithmic biases occur? To explore these two questions, four mini-cases highlight different examples of algorithmic bias, covering crime identification, suspect determination, teacher evaluation, and school classification.

Citation (APA)

Yan, S. (2021). Algorithms are not bias-free: Four mini-cases. Human Behavior and Emerging Technologies, 3(5), 1180–1184. https://doi.org/10.1002/hbe2.289
