Algorithmic Bias and Fairness in Case-Based Reasoning


Abstract

Algorithmic bias due to underestimation refers to situations where an algorithm under-predicts desirable outcomes for a protected minority. In this paper we show how this can be addressed in a case-based reasoning (CBR) context by a metric learning strategy that explicitly considers bias/fairness. Since one of the advantages CBR has over alternative machine learning approaches is interpretability, it is interesting to see how much this metric learning distorts the case-retrieval process. We find that bias is addressed with minimal impact on case-based predictions: little more than the predictions that need to be changed are changed. However, the effect on explanation is more significant, as the case-retrieval order is impacted.
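To make the two ideas in the abstract concrete, the sketch below shows (a) a simple underestimation score — the ratio of predicted to actual positive rates for the protected group, where values below 1 mean the desirable outcome is under-predicted — and (b) a feature-weighted k-NN retrieval, the basic mechanism a metric learning strategy would tune. The function names and the specific score definition are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def underestimation_score(y_true, y_pred, protected):
    """Ratio of predicted to actual positive rate within the protected group.
    A value below 1 indicates the desirable outcome (label 1) is under-predicted.
    (Illustrative definition; the paper's metric may differ.)"""
    mask = protected == 1
    return y_pred[mask].mean() / y_true[mask].mean()

def weighted_knn_predict(X_train, y_train, X_query, weights, k=3):
    """k-NN prediction under a per-feature weighted Euclidean metric.
    Adjusting `weights` changes which cases are retrieved, which is how a
    learned metric can shift predictions (and retrieval order) for fairness."""
    preds = []
    for q in X_query:
        dists = np.sqrt((weights * (X_train - q) ** 2).sum(axis=1))
        nearest = np.argsort(dists)[:k]
        preds.append(int(y_train[nearest].mean() >= 0.5))
    return np.array(preds)
```

Note that changing `weights` can leave most predictions intact while reordering the retrieved neighbours, which is exactly the explanation-side effect the abstract highlights.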

Citation (APA)

Blanzeisky, W., Smyth, B., & Cunningham, P. (2022). Algorithmic Bias and Fairness in Case-Based Reasoning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13405 LNAI, pp. 48–62). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-14923-8_4
