Debiased off-policy evaluation for recommendation systems

Abstract

Efficient methods to evaluate new algorithms are critical for improving interactive bandit and reinforcement learning systems such as recommendation systems. A/B tests are reliable, but are time- and money-consuming, and entail a risk of failure. In this paper, we develop an alternative method, which predicts the performance of an algorithm given historical data that may have been generated by a different algorithm. Our estimator has the property that its prediction converges in probability to the true performance of the counterfactual algorithm at a rate of 1/√N as the sample size N increases. We also show a correct way to estimate the variance of our prediction, thus allowing the analyst to quantify the uncertainty in the prediction. These properties hold even when the analyst does not know which among a large number of potentially important state variables are actually important. We validate our method in a simulation experiment on reinforcement learning. Finally, we apply it to improve advertisement design at a major advertising company. We find that our method produces smaller mean squared errors than state-of-the-art methods.
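The estimator described above belongs to the family of debiased (doubly-robust) off-policy evaluation methods. As a rough illustration only, and not the authors' exact estimator, the following Python sketch shows a doubly-robust value estimate for a contextual bandit with a plug-in standard error; all names, the data layout, and the cross-fitting caveat are assumptions made for this example.

```python
# Minimal sketch of doubly-robust off-policy evaluation for a contextual bandit.
# This is an illustration of the general technique, not the paper's estimator.
import numpy as np

def doubly_robust_ope(rewards, actions, propensities, target_probs, q_hat):
    """Estimate the value of a target policy from logged bandit data.

    rewards      : (N,)    observed rewards r_i
    actions      : (N,)    integer actions a_i chosen by the logging policy
    propensities : (N,)    logging-policy probabilities pi_b(a_i | x_i)
    target_probs : (N, K)  target-policy probabilities pi_e(a | x_i) for all K actions
    q_hat        : (N, K)  estimated mean rewards q_hat(x_i, a) for all K actions
                           (ideally fit with cross-fitting, so the estimate for
                           observation i does not use observation i itself)

    Returns (value_estimate, standard_error).
    """
    n = rewards.shape[0]
    # Direct-method component: expected q_hat under the target policy.
    direct = (target_probs * q_hat).sum(axis=1)
    # Importance-weighted correction for the action that was actually taken.
    q_taken = q_hat[np.arange(n), actions]
    weights = target_probs[np.arange(n), actions] / propensities
    correction = weights * (rewards - q_taken)
    # Per-observation influence-function-style terms.
    psi = direct + correction
    value = psi.mean()
    # Plug-in uncertainty: sample standard deviation of psi, scaled by sqrt(N).
    std_error = psi.std(ddof=1) / np.sqrt(n)
    return value, std_error
```

The debiasing idea is that the importance-weighted correction cancels the first-order error of the reward model q_hat, which (together with cross-fitting) is what makes root-N convergence and a valid variance estimate possible even when q_hat is fit with flexible machine learning.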

Citation (APA)

Narita, Y., Yasui, S., & Yata, K. (2021). Debiased off-policy evaluation for recommendation systems. In RecSys 2021 - 15th ACM Conference on Recommender Systems (pp. 372–379). Association for Computing Machinery, Inc. https://doi.org/10.1145/3460231.3474231
