Algorithmic bias due to underestimation refers to situations where an algorithm under-predicts desirable outcomes for a protected minority. In this paper we show how this can be addressed in a case-based reasoning (CBR) context by a metric learning strategy that explicitly considers bias/fairness. Since one of the advantages CBR has over alternative machine learning approaches is interpretability, it is worth examining how much this metric learning distorts the case-retrieval process. We find that bias is addressed with minimal impact on case-based predictions: little more than the predictions that need to change are changed. However, the effect on explanation is more significant, as the case-retrieval order is impacted.
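To make the notion of underestimation concrete, one simple way to quantify it is as the ratio of predicted to actual desirable outcomes within the protected group, so that a value below 1 means the model under-predicts for that group. The sketch below illustrates this idea; the function name and the exact formula are illustrative assumptions, not necessarily the metric used in the paper.

```python
import numpy as np

def underestimation_ratio(y_true, y_pred, protected):
    """Illustrative underestimation score: predicted positives divided by
    actual positives within the protected group. A value < 1 indicates the
    model under-predicts desirable outcomes for that group.
    (Hypothetical formulation for illustration only.)"""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected, dtype=bool)
    actual = y_true[protected].sum()
    predicted = y_pred[protected].sum()
    return predicted / actual

# Toy data: the protected group has 4 actual positives,
# but the model predicts only 2 of its members as positive.
y_true    = [1, 1, 1, 1, 0, 0, 1, 0]
y_pred    = [1, 0, 1, 0, 0, 0, 1, 1]
protected = [1, 1, 1, 1, 1, 1, 0, 0]
print(underestimation_ratio(y_true, y_pred, protected))  # 0.5
```

A fairness-aware learner would aim to push this ratio toward 1 without unnecessarily disturbing predictions for the rest of the population, which is the trade-off the abstract describes.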
Citation
Blanzeisky, W., Smyth, B., & Cunningham, P. (2022). Algorithmic Bias and Fairness in Case-Based Reasoning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13405 LNAI, pp. 48–62). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-14923-8_4