Approximate indexability and bandit problems with concave rewards and delayed feedback


Abstract

In this paper we consider two stochastic multi-armed bandit problems in the Bayesian setting. In the first problem, the reward accrued in a step is a concave function (such as the maximum) of the observed values of the arms played in that step. In the second problem, the value observed from a play of arm i is revealed only after δ_i steps. Both problems have been considered in the bandit literature, but no solutions with provably good performance guarantees over short horizons are known. The two problems are similar in that the reward (for the first) or the available information (for the second) derived from an arm is not a function of the current play of that arm alone. This interdependence between arms renders most existing analysis techniques inapplicable. A fundamental question regarding optimization in the bandit setting is indexability, i.e., the existence of near-optimal index policies. Index policies in these contexts correspond to policies over suitable single-arm state spaces that are combined into a global policy. They are extremely desirable for their simplicity and perceived robustness, but standard index policies provide no guarantees for these two problems. We construct O(1)-approximate (near) index policies in polynomial time for both problems. The analysis identifies a suitable subset of states for each arm such that index policies restricted to those subsets are O(1)-approximate. © 2013 Springer-Verlag.
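To make the two settings concrete, the sketch below simulates a toy Bayesian Bernoulli bandit that combines both features described above: each step plays a subset of arms and accrues the maximum of their observed values (a concave reward), and each arm i reveals its observation only after a delay δ_i. The Beta-posterior arms, the delays, and the naive posterior-mean index used here are illustrative assumptions — this is not the paper's O(1)-approximate policy, only a minimal sketch of the problem dynamics.

```python
import random

random.seed(0)

TRUE_P = [0.3, 0.6, 0.5]   # unknown arm means (known only to the simulator)
DELAY = [0, 2, 1]          # delta_i: feedback delay of each arm, in steps
HORIZON = 20
K = 2                      # number of arms played per step

# Beta(alpha, beta) posterior per arm (Bayesian setting, uniform prior)
alpha = [1.0] * len(TRUE_P)
beta = [1.0] * len(TRUE_P)
pending = []               # delayed observations: (reveal_step, arm, value)
total_reward = 0.0

for t in range(HORIZON):
    # reveal observations whose delay has elapsed, then update posteriors
    for _, arm, val in [p for p in pending if p[0] <= t]:
        alpha[arm] += val
        beta[arm] += 1 - val
    pending = [p for p in pending if p[0] > t]

    # naive single-arm index: posterior mean; play the top-K arms
    index = [alpha[i] / (alpha[i] + beta[i]) for i in range(len(TRUE_P))]
    played = sorted(range(len(TRUE_P)), key=lambda i: -index[i])[:K]

    # concave reward: the MAX over the observed values of the arms played
    obs = {i: 1 if random.random() < TRUE_P[i] else 0 for i in played}
    total_reward += max(obs.values())

    # observations become available only delta_i steps later
    for i in played:
        pending.append((t + 1 + DELAY[i], i, obs[i]))

print(total_reward)
```

The interdependence the abstract points to is visible here: the per-step reward depends jointly on all arms played (through the max), and the information state of an arm depends on when its past plays are revealed, so neither reward nor information decomposes arm by arm.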

Citation (APA)

Guha, S., & Munagala, K. (2013). Approximate indexability and bandit problems with concave rewards and delayed feedback. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8096 LNCS, pp. 189–204). https://doi.org/10.1007/978-3-642-40328-6_14
