Thompson sampling for stochastic bandits with graph feedback


Abstract

We present a novel extension of Thompson Sampling for stochastic sequential decision problems with graph feedback, even when the graph structure itself is unknown and/or changing. We provide theoretical guarantees on the Bayesian regret of the algorithm, linking its performance to the underlying properties of the graph. Thompson Sampling has the advantage of being applicable without the need to construct complicated upper confidence bounds for different problems. We illustrate its performance through extensive experimental results on real and simulated networks with graph feedback. More specifically, we tested our algorithms on power law, planted partition, and Erdős–Rényi graphs, as well as on graphs derived from Facebook and Flixster data. These all show that our algorithms clearly outperform related methods that employ upper confidence bounds, even when the latter use more information about the graph.
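The core idea in the abstract — Thompson Sampling where playing one arm also reveals the rewards of its neighbors in a feedback graph — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's exact algorithm: it assumes Bernoulli rewards with Beta posteriors, and `neighbors[i]` (a hypothetical input) is the set of arms observed when arm `i` is played, including `i` itself.

```python
import random

def thompson_graph_feedback(neighbors, true_means, horizon, seed=0):
    """Bernoulli Thompson Sampling with graph feedback (illustrative sketch).

    neighbors[i]: set of arms whose rewards are observed when arm i is
    played (the feedback graph's out-neighborhood, including i itself).
    true_means[i]: Bernoulli success probability of arm i (simulation only).
    """
    rng = random.Random(seed)
    k = len(true_means)
    alpha = [1] * k  # Beta(alpha, beta) posterior per arm; start uniform
    beta = [1] * k
    total_reward = 0
    for _ in range(horizon):
        # Sample a mean from each arm's posterior and play the argmax;
        # no upper confidence bound needs to be constructed.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        # Graph feedback: update every observed arm, not only the one played.
        for j in neighbors[arm]:
            r = 1 if rng.random() < true_means[j] else 0
            if j == arm:
                total_reward += r
            alpha[j] += r
            beta[j] += 1 - r
    return total_reward, alpha, beta
```

With a complete feedback graph every round updates all posteriors, so learning is much faster than in the standard bandit setting where `neighbors[i] == {i}`.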

Citation (APA)

Tossou, A. C. Y., Dimitrakakis, C., & Dubhashi, D. (2017). Thompson sampling for stochastic bandits with graph feedback. In 31st AAAI Conference on Artificial Intelligence, AAAI 2017 (pp. 2660–2666). AAAI press. https://doi.org/10.1609/aaai.v31i1.10897
