Rating Distribution Calibration for Selection Bias Mitigation in Recommendations

Abstract

Real-world recommendation datasets are subject to selection bias, which makes it difficult for recommendation models to learn users' true preferences and thus to make accurate recommendations. Existing approaches to mitigating selection bias, such as data imputation and inverse propensity scoring, are sensitive to the quality of the additional imputation or propensity-estimation models they rely on. To overcome these limitations, we propose a novel self-supervised learning (SSL) framework, Rating Distribution Calibration (RDC), which tackles selection bias without introducing additional models. Alongside the original training objective, we introduce a rating distribution calibration loss: it corrects the predicted rating distribution of biased users by leveraging the distributions of their similar unbiased users. We empirically evaluate RDC on two real-world datasets and one synthetic dataset. The experimental results show that RDC outperforms both the original model and state-of-the-art debiasing approaches by a significant margin.
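To make the core idea concrete, here is a minimal sketch of a calibration-style loss: it compares a biased user's predicted rating distribution against the average distribution of similar unbiased users via KL divergence. All function names, the histogram binning, and the use of KL are illustrative assumptions — the paper's actual loss formulation and similarity computation may differ.

```python
import numpy as np

def rating_histogram(ratings, bins=5):
    # Count predicted ratings into a smoothed, normalized distribution
    # over the rating levels 1..bins (smoothing avoids log(0) below).
    counts = np.histogram(ratings, bins=bins, range=(0.5, bins + 0.5))[0].astype(float)
    return (counts + 1e-8) / (counts.sum() + bins * 1e-8)

def kl_divergence(p, q):
    # KL(p || q) between two discrete distributions of equal length.
    return float(np.sum(p * np.log(p / q)))

def calibration_loss(pred_biased, pred_unbiased_neighbors, bins=5):
    # Hypothetical calibration term: divergence between a biased user's
    # predicted rating distribution and the mean distribution of that
    # user's similar unbiased neighbors. Added to the original training
    # objective in the spirit of the RDC framework (illustrative only).
    p = rating_histogram(np.asarray(pred_biased), bins)
    q = np.mean([rating_histogram(np.asarray(r), bins)
                 for r in pred_unbiased_neighbors], axis=0)
    return kl_divergence(p, q)
```

When the biased user's predicted distribution already matches the neighbors', the loss is near zero; the more the distributions diverge, the larger the correction signal.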

Citation (APA)
Liu, H., Tang, D., Yang, J., Zhao, X., Liu, H., Tang, J., & Cheng, Y. (2022). Rating Distribution Calibration for Selection Bias Mitigation in Recommendations. In WWW 2022 - Proceedings of the ACM Web Conference 2022 (pp. 2048–2057). Association for Computing Machinery, Inc. https://doi.org/10.1145/3485447.3512078
