Budget-constrained multi-armed bandits with multiple plays


Abstract

We study the multi-armed bandit problem with multiple plays and a budget constraint for both the stochastic and the adversarial setting. At each round, exactly K out of N possible arms have to be played (with 1 ≤ K ≤ N). In addition to observing the individual rewards for each arm played, the player also learns a vector of costs which has to be covered with an a priori defined budget B. The game ends when the sum of current costs associated with the played arms exceeds the remaining budget. First, we analyze this setting for the stochastic case, for which we assume each arm to have an underlying cost and reward distribution with support [c_min, 1] and [0, 1], respectively. We derive an Upper Confidence Bound (UCB) algorithm which achieves O(NK^4 log B) regret. Second, for the adversarial case in which the entire sequence of rewards and costs is fixed in advance, we derive an upper bound on the regret of order O(√(NB log(N/K))) utilizing an extension of the well-known Exp3 algorithm. We also provide upper bounds that hold with high probability and a lower bound of order Ω((1 − K/N)^2 √(NB/K)).
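To illustrate the stochastic setting described in the abstract, the sketch below simulates a budgeted UCB-style strategy with multiple plays: each round it selects the K arms with the highest optimistic-reward / pessimistic-cost index and stops once the incurred costs would exceed the remaining budget. All concrete values (N, K, B, c_min, the reward and cost distributions) and the specific index rule are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 6, 2          # arms and plays per round (assumed values)
B = 500.0            # a-priori budget
c_min = 0.1          # cost support is [c_min, 1] as in the abstract

# Hypothetical ground-truth Bernoulli reward means and cost means
reward_mean = rng.uniform(0.2, 0.9, N)
cost_mean = rng.uniform(c_min, 0.9, N)

pulls = np.zeros(N)
reward_sum = np.zeros(N)
cost_sum = np.zeros(N)

# Initialization: pull every arm once (simplified to one pull each)
for i in range(N):
    pulls[i] = 1
    reward_sum[i] = rng.binomial(1, reward_mean[i])
    cost_sum[i] = np.clip(rng.normal(cost_mean[i], 0.1), c_min, 1.0)

budget = B - cost_sum.sum()
total_reward = reward_sum.sum()
t = N

while True:
    t += 1
    conf = np.sqrt(2 * np.log(t) / pulls)
    # Optimistic reward estimate over pessimistic cost estimate
    index = (reward_sum / pulls + conf) / np.maximum(cost_sum / pulls - conf, c_min)
    arms = np.argsort(index)[-K:]          # play the K highest-index arms
    rewards = rng.binomial(1, reward_mean[arms])
    costs = np.clip(rng.normal(cost_mean[arms], 0.1), c_min, 1.0)
    if costs.sum() > budget:               # game ends when the budget is exceeded
        break
    budget -= costs.sum()
    total_reward += rewards.sum()
    reward_sum[arms] += rewards
    cost_sum[arms] += costs
    pulls[arms] += 1

print(f"total reward collected: {total_reward:.0f}")
```

Because every played arm costs at least c_min, the budget B drains by at least K·c_min per round, so the loop is guaranteed to terminate; the O(NK^4 log B) regret guarantee in the paper applies to their algorithm, not to this toy ratio index.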

Citation (APA)

Zhou, D. P., & Tomlin, C. J. (2018). Budget-constrained multi-armed bandits with multiple plays. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 4572–4579). AAAI press. https://doi.org/10.1609/aaai.v32i1.11629
