Algorithms are everywhere. As we move closer to a fully automated world, algorithms need to be investigated for possible biases and treated as imperfect systems. This case study discusses how cognitive biases infiltrate everyday technologies and give rise to algorithmic biases. Mental shortcuts do not only create biases in our minds; they also create biases in algorithms through development and implementation processes. Two key questions are examined in this case study: (1) Where do algorithmic biases appear? (2) How do algorithmic biases occur? To explore these two questions, four mini-cases are used to highlight different examples of algorithmic bias, covering crime identification, suspect determination, teacher evaluation, and school classification.
Yan, S. (2021). Algorithms are not bias-free: Four mini-cases. Human Behavior and Emerging Technologies, 3(5), 1180–1184. https://doi.org/10.1002/hbe2.289