FaiRecSys: mitigating algorithmic bias in recommender systems

Abstract

Recommendation and personalization are useful technologies that increasingly influence our daily decisions. However, as we show empirically in this paper, the bias that exists in the real world, and that is reflected in the training data, can be modeled and amplified by recommender systems and ultimately returned to users as biased recommendations. This feedback process creates a self-perpetuating loop that progressively strengthens the filter bubbles we live in. Biased recommendations can also reinforce stereotypes, such as those based on gender or ethnicity, possibly resulting in disparate impact. In this paper we address the problem of algorithmic bias in recommender systems. In particular, we highlight the connection between the predictability of sensitive features and bias in the recommendation results, and we offer a theoretically founded bound on recommendation bias based on that connection. We then formalize a fairness constraint and the price that one has to pay, in terms of alterations to the recommendation matrix, in order to achieve fair recommendations. Finally, we propose FaiRecSys, an algorithm that mitigates algorithmic bias by post-processing the recommendation matrix with minimum impact on the utility of the recommendations provided to end-users.
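
The abstract ties bias to the predictability of sensitive features from the recommendations themselves. The sketch below illustrates that idea in Python: train a classifier to predict a sensitive attribute from the rows of a recommendation matrix and report the balanced error rate (BER), so that a BER near 0 signals strong leakage (bias) while a BER near 0.5 signals near-independence. This is a minimal illustration of the predictability notion, not the paper's actual bound or the FaiRecSys post-processing step; the function name, classifier choice, and toy data are assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def predictability_bias(R, s):
    """Estimate how well a binary sensitive attribute s can be
    predicted from the rows of a recommendation matrix R
    (users x items). Returns the balanced error rate (BER):
    near 0.5 = recommendations leak little about s,
    near 0.0 = s is almost fully recoverable (biased)."""
    clf = LogisticRegression(max_iter=1000)
    # Out-of-fold predictions so the score is not inflated by overfitting.
    s_hat = cross_val_predict(clf, R, s, cv=5)
    # Average the per-class error rates so class imbalance in s
    # does not distort the measure.
    err_pos = np.mean(s_hat[s == 1] != 1)
    err_neg = np.mean(s_hat[s == 0] != 0)
    return 0.5 * (err_pos + err_neg)

# Toy usage: 200 users, 50 items, with item exposure deliberately
# skewed by the (hypothetical) sensitive attribute.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=200)
R = (rng.random((200, 50)) < (0.2 + 0.3 * s[:, None])).astype(float)
print(f"BER = {predictability_bias(R, s):.3f}")
```

Under this framing, a post-processing mitigation in the spirit of FaiRecSys would alter R as little as possible (to preserve utility) while pushing the BER back toward 0.5.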

Citation (APA)
Edizel, B., Bonchi, F., Hajian, S., Panisson, A., & Tassa, T. (2020). FaiRecSys: mitigating algorithmic bias in recommender systems. International Journal of Data Science and Analytics, 9(2), 197–213. https://doi.org/10.1007/s41060-019-00181-5
