Learning to Rank in the Position Based Model with Bandit Feedback


Abstract

Personalization is a crucial aspect of many online experiences. In particular, content ranking is often a key component in delivering sophisticated personalization results. Commonly, supervised learning-to-rank methods are applied, which suffer from bias introduced during data collection by the production systems in charge of producing the rankings. To compensate for this problem, we leverage contextual multi-armed bandits. We propose novel extensions of two well-known algorithms, LinUCB and Linear Thompson Sampling, to the ranking use case. To account for the biases present in a production environment, we employ the position-based click model. Finally, we show the validity of the proposed algorithms through extensive offline experiments on synthetic datasets as well as customer-facing online A/B experiments.
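To make the click model concrete: under the position-based model referenced in the abstract, the probability of a click on an item shown at a given rank factorizes into a rank-dependent examination probability and an item-dependent attractiveness. The sketch below simulates bandit feedback under this model; all parameter values and names (`examination`, `attractiveness`) are illustrative assumptions, not values from the paper.

```python
import random

def simulate_clicks(ranking, attractiveness, examination, rng=None):
    """Simulate one round of bandit feedback under the position-based
    click model: P(click on item at rank k) = examination[k] * attractiveness[item].
    Returns a 0/1 click indicator per position."""
    rng = rng or random.Random(0)
    clicks = []
    for rank, item in enumerate(ranking):
        p = examination[rank] * attractiveness[item]
        clicks.append(1 if rng.random() < p else 0)
    return clicks

# Hypothetical parameters: examination probability decays with rank,
# and each item has a fixed attractiveness in [0, 1].
examination = [1.0, 0.6, 0.3]
attractiveness = {"a": 0.9, "b": 0.5, "c": 0.1}

clicks = simulate_clicks(["a", "b", "c"], attractiveness, examination)
```

The same factorization is what lets a learner disentangle position bias from item quality: an item clicked rarely at rank 3 may still be highly attractive, since `examination[2]` discounts its observed click rate.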

Citation (APA)

Ermis, B., Ernst, P., Stein, Y., & Zappella, G. (2020). Learning to Rank in the Position Based Model with Bandit Feedback. In International Conference on Information and Knowledge Management, Proceedings (pp. 2405–2412). Association for Computing Machinery. https://doi.org/10.1145/3340531.3412723
