Rank List Sensitivity of Recommender Systems to Interaction Perturbations

26 citations · 14 Mendeley readers

Abstract

Prediction models can exhibit sensitivity with respect to training data: small changes in the training data can produce models that assign conflicting predictions to individual data points at test time. In this work, we study this sensitivity in recommender systems, where users' recommendations are drastically altered by minor perturbations in other, unrelated users' interactions. We introduce a measure of stability for recommender systems, called Rank List Sensitivity (RLS), which quantifies how the rank lists generated by a recommender system at test time change as a result of a perturbation in the training data. We develop a method, CASPER, which uses a cascading effect to identify a minimal, systematic perturbation that induces high instability in a recommender system. Experiments on four datasets show that recommender models are overly sensitive to minor perturbations introduced randomly or via CASPER: even perturbing one random interaction of one user drastically changes the recommendation lists of all users. Importantly, under CASPER perturbations, the models generate more unstable recommendations for low-accuracy users (i.e., those who receive low-quality recommendations) than for high-accuracy ones.
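The abstract does not spell out the similarity function behind RLS, so the following is only an illustrative sketch: instability is computed as one minus the average top-k overlap (Jaccard here) between each user's rank lists before and after the perturbation. The function names and the dictionary-based input format are hypothetical, not the paper's API.

```python
def jaccard_at_k(list_a, list_b, k=10):
    """Jaccard similarity between the top-k items of two rank lists."""
    a, b = set(list_a[:k]), set(list_b[:k])
    return len(a & b) / len(a | b)

def rank_list_sensitivity(before, after, k=10):
    """Instability across users: 1 - mean top-k Jaccard similarity.

    `before` and `after` map each user ID to the ranked item list
    produced by the model trained without / with the perturbation.
    """
    sims = [jaccard_at_k(before[u], after[u], k) for u in before]
    return 1.0 - sum(sims) / len(sims)

# Toy example: user u1's top-3 list shifts after a one-interaction
# perturbation elsewhere in the training data; u2's list is unchanged.
before = {"u1": ["i1", "i2", "i3"], "u2": ["i4", "i5", "i6"]}
after  = {"u1": ["i1", "i3", "i7"], "u2": ["i4", "i5", "i6"]}
print(rank_list_sensitivity(before, after, k=3))  # 0.25
```

A rank-aware measure such as rank-biased overlap would weight agreement at the top of the lists more heavily; set overlap is used here only to keep the sketch self-contained.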

Cite

CITATION STYLE

APA

Oh, S., Ustun, B., McAuley, J., & Kumar, S. (2022). Rank List Sensitivity of Recommender Systems to Interaction Perturbations. In International Conference on Information and Knowledge Management, Proceedings (pp. 1584–1594). Association for Computing Machinery. https://doi.org/10.1145/3511808.3557425
