Thompson Sampling for Bandits with Clustered Arms

Abstract

We propose algorithms based on a multi-level Thompson sampling scheme, for the stochastic multi-armed bandit and its contextual variant with linear expected rewards, in the setting where arms are clustered. We show, both theoretically and empirically, how exploiting a given cluster structure can significantly improve the regret and computational cost compared to using standard Thompson sampling. In the case of the stochastic multi-armed bandit we give upper bounds on the expected cumulative regret showing how it depends on the quality of the clustering. Finally, we perform an empirical evaluation showing that our algorithms perform well compared to previously proposed algorithms for bandits with clustered arms.
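The abstract describes a multi-level Thompson sampling scheme over clustered arms. A minimal two-level sketch is shown below for Bernoulli rewards with Beta posteriors; the cluster-scoring rule (taking the best posterior sample among a cluster's arms) is an illustrative assumption, not necessarily the aggregation used in the paper.

```python
import random

def two_level_thompson_step(clusters, successes, failures):
    """One round of a two-level Thompson sampling sketch (illustrative).

    clusters: dict mapping cluster id -> list of arm ids
    successes, failures: dicts mapping arm id -> Bernoulli reward counts
    Returns the arm to pull this round.
    """
    # Draw a Beta posterior sample for every arm (Beta(1,1) prior).
    arm_samples = {}
    for arms in clusters.values():
        for a in arms:
            arm_samples[a] = random.betavariate(successes[a] + 1, failures[a] + 1)

    # Level 1: score each cluster by its best arm sample (assumed rule),
    # so exploration is first directed at a promising cluster.
    cluster_scores = {c: max(arm_samples[a] for a in arms)
                      for c, arms in clusters.items()}
    best_cluster = max(cluster_scores, key=cluster_scores.get)

    # Level 2: within the chosen cluster, play the arm with the
    # highest posterior sample.
    return max(clusters[best_cluster], key=lambda a: arm_samples[a])
```

A hypothetical usage: call `two_level_thompson_step` each round, pull the returned arm, and increment its success or failure count with the observed reward. Because only one cluster is examined at level 2, a good clustering can cut both regret (bad clusters are quickly ruled out) and per-round computation, which is the effect the paper quantifies.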

Citation (APA)

Carlsson, E., Dubhashi, D., & Johansson, F. D. (2021). Thompson Sampling for Bandits with Clustered Arms. In IJCAI International Joint Conference on Artificial Intelligence (pp. 2212–2218). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/305
