Challenges in Translating Research to Practice for Evaluating Fairness and Bias in Recommendation Systems

Abstract

Calls to action to implement evaluation of fairness and bias in industry systems are increasing at a rapid rate. The research community has attempted to meet these demands by producing ethical principles and guidelines for AI, but few of these documents provide guidance on how to implement the principles in real-world settings. Without readily available, standardized, and practice-tested approaches for evaluating fairness in recommendation systems, industry practitioners, who are often not experts, may easily run into challenges or implement metrics that are poorly suited to their specific applications. When evaluating recommendations, practitioners are well aware that they should evaluate their systems for unintended algorithmic harms, but the most important, and unanswered, question is: how? In this talk, we present practical challenges we encountered in addressing algorithmic responsibility in recommendation systems, which also present research opportunities for the RecSys community. The talk focuses on the steps that need to happen before bias mitigation can even begin.

Citation (APA)

Beattie, L., Taber, D., & Cramer, H. (2022). Challenges in Translating Research to Practice for Evaluating Fairness and Bias in Recommendation Systems. In RecSys 2022 - Proceedings of the 16th ACM Conference on Recommender Systems (pp. 528–530). Association for Computing Machinery, Inc. https://doi.org/10.1145/3523227.3547403
