Deep Bayesian Bandits: Exploring in Online Personalized Recommendations

Abstract

Recommender systems trained in a continuous learning fashion are plagued by the feedback loop problem, also known as algorithmic bias: a newly trained model acts greedily and favors items that users have already engaged with. This behavior is particularly harmful in personalized ads recommendation, where it can also leave new campaigns unexplored. Exploration aims to address this limitation by gathering new information about the environment, which encompasses user preference, and can lead to higher long-term reward. In this work, we formulate a display advertising recommender as a contextual bandit and implement exploration techniques that sample from the posterior distribution of click-through rates in a computationally tractable manner. Traditional large-scale deep learning models do not provide uncertainty estimates by default; we approximate the uncertainty of their predictions by employing a bootstrapped model with multiple heads and dropout units. We benchmark a number of different models in an offline simulation environment using a publicly available dataset of user-ads engagements. We test our proposed deep Bayesian bandits algorithm in the offline simulation and in an online A/B test with large-scale production traffic, where we demonstrate a positive gain from our exploration model.
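To make the abstract's approach concrete, the sketch below illustrates the kind of architecture it describes: a shared network body with several bootstrap heads and dropout, from which Thompson-sampling-style exploration draws approximate posterior samples of click-through rates. This is a minimal illustration under assumptions, not the authors' implementation; the class name `ContextualBanditNet`, the head count, layer sizes, and dropout rate are all hypothetical choices.

```python
# A minimal sketch (assumed, not the paper's code) of CTR exploration with
# a bootstrapped multi-head network plus dropout, as described above.
import torch
import torch.nn as nn

NUM_HEADS = 5  # number of bootstrap heads (illustrative value)

class ContextualBanditNet(nn.Module):
    def __init__(self, feature_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Shared body, trained on all logged traffic.
        self.body = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(p=0.2),  # dropout kept stochastic at serving time
        )
        # One output head per bootstrap replica; each head would be
        # trained on a resampled subset of the data.
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, 1) for _ in range(NUM_HEADS)
        )

    def forward(self, x: torch.Tensor, head: int) -> torch.Tensor:
        # Predicted click-through rate from one bootstrap head.
        return torch.sigmoid(self.heads[head](self.body(x)))

def thompson_select(model: ContextualBanditNet, candidates: torch.Tensor) -> int:
    """Pick one ad by sampling from the approximate CTR posterior:
    draw a random head, keep dropout active, and take the argmax."""
    model.train()  # leave dropout on so each call adds sampling noise
    head = int(torch.randint(NUM_HEADS, (1,)).item())
    with torch.no_grad():
        sampled_ctr = model(candidates, head).squeeze(-1)
    return int(sampled_ctr.argmax())

# Usage: score a batch of candidate-ad feature vectors and pick one.
candidates = torch.randn(10, 32)  # 10 candidate ads, 32 features (dummy data)
model = ContextualBanditNet(feature_dim=32)
chosen = thompson_select(model, candidates)
```

Randomizing over heads captures uncertainty from bootstrapped training data, while keeping dropout active at inference adds per-call noise; together they approximate sampling from the posterior without maintaining an explicit Bayesian model.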

Citation (APA)

Guo, D., Ktena, S. I., Myana, P. K., Huszar, F., Shi, W., Tejani, A., … Das, S. (2020). Deep Bayesian Bandits: Exploring in Online Personalized Recommendations. In RecSys 2020 - 14th ACM Conference on Recommender Systems (pp. 456–461). Association for Computing Machinery, Inc. https://doi.org/10.1145/3383313.3412214
