Reported evidence of biased matchmaking calls into question the ethicality of recommendations generated by machine learning algorithms. In the context of dating services, the failure of an automated matchmaker to respect a user’s expressed sensitive preferences (racial, religious, etc.) may produce biased recommendations that users perceive as unfair. To address this issue, we introduce the notion of preferential fairness and propose two algorithmic approaches for re-ranking recommendations under preferential fairness constraints. Our experimental results demonstrate that fairness can be achieved with minimal accuracy compromises for both binary and non-binary attributes.
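To make the idea of re-ranking under a preference constraint concrete, the following is a minimal, hypothetical sketch (not the authors' actual algorithms): given candidates sorted by model score, each tagged with a sensitive attribute value, a greedy pass rebuilds the top-k list so that the share of candidates matching the user's expressed preference stays at or above a target proportion `alpha`, while otherwise following the score order. The function name, parameters, and the proportional-constraint formulation are illustrative assumptions.

```python
import math

def rerank_with_preference(candidates, preferred, alpha, k):
    """Greedily re-rank score-sorted candidates so that, at every prefix
    of the top-k list, at least ceil(alpha * prefix_length) items carry
    the user's preferred sensitive attribute value (when feasible).

    candidates: list of (score, attribute) tuples, sorted by score desc.
    preferred:  attribute value the user expressed a preference for.
    alpha:      target minimum share of preferred items in the list.
    k:          length of the returned ranking.

    NOTE: illustrative sketch only, not the algorithm from the paper.
    """
    pref_pool = [c for c in candidates if c[1] == preferred]
    other_pool = [c for c in candidates if c[1] != preferred]
    result, pi, oi = [], 0, 0
    while len(result) < k and (pi < len(pref_pool) or oi < len(other_pool)):
        need = math.ceil(alpha * (len(result) + 1))
        have = sum(1 for c in result if c[1] == preferred)
        if have < need and pi < len(pref_pool):
            # Constraint would be violated: force a preferred candidate.
            result.append(pref_pool[pi]); pi += 1
        elif oi >= len(other_pool) or (
            pi < len(pref_pool) and pref_pool[pi][0] >= other_pool[oi][0]
        ):
            # Otherwise take whichever pool head has the higher score.
            result.append(pref_pool[pi]); pi += 1
        else:
            result.append(other_pool[oi]); oi += 1
    return result
```

The trade-off the abstract alludes to is visible here: whenever the constraint forces a lower-scored preferred candidate ahead of a higher-scored one, ranking accuracy is sacrificed for fairness, and the experiments in the paper quantify how small that sacrifice can be kept.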
Paraschakis, D., & Nilsson, B. J. (2020). Matchmaking under fairness constraints: A speed dating case study. In Communications in Computer and Information Science (Vol. 1245 CCIS, pp. 43–57). Springer. https://doi.org/10.1007/978-3-030-52485-2_5