Fairness in recommendation ranking through pairwise comparisons

Abstract

Recommender systems are one of the most pervasive applications of machine learning in industry, with many services using them to match users to products or information. As such, it is important to ask: what are the possible fairness risks, how can we quantify them, and how should we address them? In this paper, we offer a set of novel metrics for evaluating algorithmic fairness concerns in recommender systems. In particular, we show how measuring fairness based on pairwise comparisons from randomized experiments provides a tractable means to reason about fairness in rankings from recommender systems. Building on this metric, we offer a new regularizer that encourages improving it during model training and thus improves fairness in the resulting rankings. We apply this pairwise regularization to a large-scale, production recommender system and show that we are able to significantly improve the system's pairwise fairness.
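
The abstract describes the pairwise metric and regularizer only at a high level. The sketch below is one plausible reading, not the paper's exact formulation: it assumes "pairwise fairness" means that, for each group, the model should order (clicked, unclicked) item pairs correctly at a similar rate, with the fairness gap being the difference in that pairwise accuracy between groups. All function names, the two-group setup, and the synthetic data are assumptions of this illustration.

```python
import numpy as np


def pairwise_accuracy(scores, labels, group, group_id):
    """Fraction of (clicked, unclicked) pairs, restricted to clicked items
    from `group_id`, where the model scores the clicked item higher.
    A hypothetical reading of the pairwise metric, not the paper's exact
    definition."""
    correct, total = 0, 0
    for i in range(len(scores)):
        if labels[i] != 1 or group[i] != group_id:
            continue
        for j in range(len(scores)):
            if labels[j] == 0:
                total += 1
                correct += scores[i] > scores[j]
    return correct / max(total, 1)


def pairwise_fairness_gap(scores, labels, group):
    """Absolute gap in pairwise accuracy between two groups (0 and 1).
    A gap near zero means the ranker orders clicked-vs-unclicked pairs
    about equally well for both groups."""
    return abs(pairwise_accuracy(scores, labels, group, 0)
               - pairwise_accuracy(scores, labels, group, 1))


# Toy illustration with synthetic data.
rng = np.random.default_rng(0)
scores = rng.normal(size=200)           # model ranking scores
labels = rng.integers(0, 2, size=200)   # 1 = clicked, 0 = not clicked
group = rng.integers(0, 2, size=200)    # sensitive-group membership
print(pairwise_fairness_gap(scores, labels, group))
```

A regularizer in this spirit would add a differentiable surrogate of this gap (for example, a penalty on the correlation between group membership and pairwise score differences) to the ranking loss during training, rather than computing the gap exactly as above.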

Citation (APA)

Beutel, A., Chen, J., Doshi, T., Qian, H., Wei, L., Wu, Y., … Goodrow, C. (2019). Fairness in recommendation ranking through pairwise comparisons. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 2212–2220). Association for Computing Machinery. https://doi.org/10.1145/3292500.3330745
