Localized fairness in recommender systems


Abstract

Recent research in fairness in machine learning has identified situations in which biases in input data can cause harmful or unwanted effects. Researchers in the areas of personalization and recommendation have begun to study similar types of bias. What these lines of research share is a fixed representation of the protected groups relative to which bias must be monitored. However, in some real-world application contexts, such groups cannot be defined a priori, but must be derived from the data itself. Furthermore, as we show, it may be insufficient in such cases to examine global system properties to identify protected groups. Thus, we demonstrate that fairness may be local, and the identification of protected groups may be possible only through consideration of local conditions.
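The abstract's central claim, that global system properties can mask locally unfair treatment, can be illustrated with a minimal sketch. The data, region names, and group labels below are entirely hypothetical and are not taken from the paper; the sketch only shows how a recommendation system can exhibit perfect global exposure parity between two groups while each local region shows a strong skew.

```python
# Hypothetical illustration: global exposure parity masking local disparity.
# Each record is (region, group, recommendation_count); all values invented.
data = [
    ("A", "X", 80), ("A", "Y", 20),  # region A: group X heavily favored
    ("B", "X", 20), ("B", "Y", 80),  # region B: group Y heavily favored
]

def exposure_share(records, group):
    """Fraction of all recommendations in `records` given to `group`."""
    total = sum(count for _, _, count in records)
    return sum(count for _, g, count in records if g == group) / total

# Globally, groups X and Y each receive exactly half the recommendations.
global_share_x = exposure_share(data, "X")  # 0.5

# Locally, each region shows an 80/20 skew toward one group.
region_a = [r for r in data if r[0] == "A"]
local_share_x = exposure_share(region_a, "X")  # 0.8
```

A global audit of this toy system would report parity, while an audit conditioned on region would reveal which group is disadvantaged where, which is the kind of local analysis the paper argues for.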


Citation (APA)

Sonboli, N., & Burke, R. (2019). Localized fairness in recommender systems. In ACM UMAP 2019 Adjunct - Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization (pp. 295–300). Association for Computing Machinery, Inc. https://doi.org/10.1145/3314183.3323845
