Attesting digital discrimination using norms

Citations: 4
Readers: 27 (Mendeley users who have this article in their library)
Abstract

An increasing number of decisions are being delegated to Machine Learning (ML) and automated decision systems. Despite initial misconceptions that these systems are unbiased and fair, recent cases, such as racist algorithms being used to inform parole decisions in the US, low-income neighborhoods being targeted with high-interest loans and low credit scores, and women being undervalued by online marketing, have fuelled public distrust in machine learning. This poses a significant challenge to the adoption of ML by companies and public sector organisations, even though ML has the potential to substantially reduce costs and support more efficient decisions, and it is motivating research in algorithmic fairness and fair ML. Much of that research provides detailed statistics, metrics and algorithms that are difficult to interpret and use by someone without technical skills. This paper aims to bridge the gap between lay users and fairness metrics by using simpler notions and concepts to represent and reason about digital discrimination. In particular, we use norms as an abstraction to communicate situations that may lead algorithms to commit discrimination: we formalise non-discrimination norms in the context of ML systems and propose an algorithm to attest whether ML systems violate these norms.
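
The paper's formalisation and attestation algorithm are not reproduced on this page, but the core idea of checking an ML system against a non-discrimination norm can be illustrated with a minimal sketch. Assuming, purely for illustration, that a norm is expressed as a bound on a group fairness metric (here, statistical parity difference), the names NonDiscriminationNorm, statistical_parity_difference and attest below are hypothetical and are not the authors' formalisation:

# A minimal, hypothetical sketch (not the paper's formalisation): a
# non-discrimination norm expressed as a bound on a group fairness metric,
# and a simple check that "attests" whether observed predictions violate it.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class NonDiscriminationNorm:
    protected_attribute: str                                  # e.g. "gender"
    metric: Callable[[Sequence[int], Sequence[str]], float]   # disparity measure
    threshold: float                                          # maximum tolerated disparity

def statistical_parity_difference(predictions: Sequence[int], groups: Sequence[str]) -> float:
    """Largest gap in positive-outcome rates across the protected groups."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

def attest(norm: NonDiscriminationNorm, predictions: Sequence[int], groups: Sequence[str]) -> bool:
    """Return True if the predictions violate the norm's disparity bound."""
    return norm.metric(predictions, groups) > norm.threshold

# Toy usage: a norm tolerating at most a 0.1 gap in acceptance rates by gender.
norm = NonDiscriminationNorm("gender", statistical_parity_difference, 0.1)
preds  = [1, 1, 0, 1, 0, 0, 0, 0]
groups = ["m", "m", "m", "m", "f", "f", "f", "f"]
print(attest(norm, preds, groups))   # True: 0.75 vs 0.0 acceptance rates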


Citation (APA)

Criado, N., Ferrer, X., & Such, J. M. (2021). Attesting digital discrimination using norms. International Journal of Interactive Multimedia and Artificial Intelligence, 6(5), 16–23. https://doi.org/10.9781/ijimai.2021.02.008

Readers' Seniority

PhD / Postgrad / Masters / Doc: 4 (40%)
Professor / Associate Prof.: 2 (20%)
Lecturer / Postdoc: 2 (20%)
Researcher: 2 (20%)

Readers' Discipline

Computer Science: 4 (40%)
Business, Management and Accounting: 3 (30%)
Social Sciences: 2 (20%)
Neuroscience: 1 (10%)
