Optimizing Long-term Value for Auction-Based Recommender Systems via On-Policy Reinforcement Learning

Abstract

Auction-based recommender systems are prevalent in online advertising platforms, but they are typically optimized to allocate recommendation slots based on immediate expected return metrics, neglecting the downstream effects of recommendations on user behavior. In this study, we employ reinforcement learning to optimize for long-term return metrics in an auction-based recommender system. Utilizing temporal difference learning, a fundamental reinforcement learning algorithm, we implement a one-step policy improvement approach that biases the system towards recommendations with higher long-term user engagement metrics. This optimizes value over long horizons while maintaining compatibility with the auction framework. Our approach is grounded in dynamic programming, which shows that our method provably improves upon the existing auction-based base policy. Through an online A/B test conducted on an auction-based recommender system that handles billions of impressions and users daily, we empirically establish that our proposed method outperforms the current production system in terms of long-term user engagement metrics.
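
To make the mechanics concrete, below is a minimal Python sketch of the general pattern the abstract describes: a tabular TD(0) estimator of long-term user-engagement value, whose bootstrapped estimates are used for a one-step policy improvement on top of the auction's base ranking score. This is an illustrative sketch, not the paper's implementation; the state encoding, the next_state_of predictor, the reward estimates, and the beta trade-off knob are all assumptions introduced here.

```python
import numpy as np


class TDValueEstimator:
    """Tabular TD(0) estimator of long-term engagement value.

    Hypothetical sketch: state indices, learning rate, and discount
    factor are illustrative assumptions, not values from the paper.
    """

    def __init__(self, n_states: int, lr: float = 0.05, gamma: float = 0.95):
        self.v = np.zeros(n_states)  # V(s) estimates, one per state
        self.lr = lr
        self.gamma = gamma

    def update(self, s: int, r: float, s_next: int, done: bool) -> None:
        # TD(0) update: move V(s) toward the bootstrapped target
        # r + gamma * V(s'), with no bootstrap on terminal transitions.
        target = r + (0.0 if done else self.gamma * self.v[s_next])
        self.v[s] += self.lr * (target - self.v[s])


def rerank_slate(candidates, estimator, next_state_of, beta=0.1):
    """One-step policy improvement over the auction's base ranking.

    `candidates` holds (item_id, auction_score, reward_estimate) tuples.
    The adjusted score biases slot allocation toward items whose
    predicted post-impression user state has higher long-term value;
    `beta` trades off the auction objective against the long-term term
    (an assumed knob, not specified in the abstract).
    """
    def adjusted(c):
        item, auction_score, r_hat = c
        s_next = next_state_of(item)  # hypothetical next-state predictor
        return auction_score + beta * (
            r_hat + estimator.gamma * estimator.v[s_next]
        )

    return sorted(candidates, key=adjusted, reverse=True)
```

Because the long-term term only adds a bonus to the existing auction score rather than replacing it, the adjustment stays compatible with the auction framework, which matches the abstract's framing of a one-step improvement over the base policy.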

Citation (APA)

Xu, R., Bhandari, J., Korenkevych, D., Liu, F., He, Y., Nikulkov, A., & Zhu, Z. (2023). Optimizing Long-term Value for Auction-Based Recommender Systems via On-Policy Reinforcement Learning. In Proceedings of the 17th ACM Conference on Recommender Systems, RecSys 2023 (pp. 955–962). Association for Computing Machinery, Inc. https://doi.org/10.1145/3604915.3608854
