Assessing algorithmic fairness without sensitive information


Abstract

As algorithmic decision-making becomes more prevalent, so does the study of algorithmic fairness. When fairness is disregarded, bias and discrimination are created, reproduced, or amplified. Accordingly, work has been done to harmonize definitions of fairness and to categorize ways of improving it. While using demographic data about the protected group is a possible solution, privacy concerns and uncertainty about which attributes are relevant make this unrealistic in real-world applications. Consequently, in this work we provide an overview of methods that do not require such data, identify areas that may be under-researched, and propose research questions for the first phase of the PhD. The influence of dataset size on the discovery and mitigation of unknown biases appears to be one such area, and one we plan to explore more fully during the thesis.
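For context, a minimal sketch (not from the paper) of why standard group-fairness metrics depend on sensitive attributes: demographic parity, a common fairness definition, compares positive-prediction rates across protected groups, so it cannot be evaluated when group labels are unavailable. All names below are illustrative.

import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred    : binary predictions (0/1)
    sensitive : binary protected-group labels (0/1) -- exactly the
                demographic data that is often unavailable in practice.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: with group labels the metric is straightforward.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, groups))  # 0.5

# If `groups` is unobserved (privacy constraints, unknown relevant
# attributes), this metric simply cannot be computed -- the gap that
# the surveyed methods aim to close without such data.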

Citation (APA)

Noiret, S. (2021). Assessing algorithmic fairness without sensitive information. In GoodIT 2021 - Proceedings of the 2021 Conference on Information Technology for Social Good (pp. 325–328). Association for Computing Machinery, Inc. https://doi.org/10.1145/3462203.3475894
