An extension of the traditional two-armed bandit problem is considered, in which the decision maker has access to some side information before deciding which arm to pull. At each time t, before making a selection, the decision maker is able to observe a random variable, X_t, that provides some information on the rewards to be obtained. The focus is on finding uniformly good rules (that minimize the growth rate of the regret) and on quantifying how much the additional information helps. Various settings are considered and asymptotically tight lower bounds on the achievable regret are provided.
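The setting described above can be illustrated with a minimal simulation sketch. This is not one of the policies analyzed in the paper: the two-context Bernoulli reward model and the epsilon-greedy rule below are assumptions chosen only to show why observing X_t before pulling an arm can reduce regret compared with ignoring it.

```python
import random

def pull(arm, x, rng):
    # Hypothetical reward model (an assumption for illustration):
    # which arm is better depends on the side observation x.
    p = [[0.7, 0.3], [0.2, 0.8]][x][arm]  # P(reward = 1 | x, arm)
    return 1 if rng.random() < p else 0

def run(use_side_info, horizon=5000, seed=0):
    """Epsilon-greedy play; regret is measured against an oracle that
    knows the best arm for each realized side observation."""
    rng = random.Random(seed)
    n_ctx = 2 if use_side_info else 1   # one lumped context if X_t is ignored
    counts = [[1e-9, 1e-9] for _ in range(n_ctx)]
    sums = [[0.0, 0.0] for _ in range(n_ctx)]
    total = 0
    for _ in range(horizon):
        x = rng.randrange(2)            # side observation X_t, seen before choosing
        c = x if use_side_info else 0
        if rng.random() < 0.1:          # explore
            arm = rng.randrange(2)
        else:                           # exploit current per-context estimates
            means = [sums[c][a] / counts[c][a] for a in range(2)]
            arm = means.index(max(means))
        r = pull(arm, x, rng)
        counts[c][arm] += 1
        sums[c][arm] += r
        total += r
    # Oracle expected reward per round: 0.5 * 0.7 + 0.5 * 0.8 = 0.75.
    return 0.75 * horizon - total
```

With a fixed seed, the context-aware policy incurs markedly less regret than the context-blind one, because ignoring X_t forces a single compromise arm (mean 0.55) instead of the per-context optimum (mean 0.75).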
Wang, C. C., Kulkarni, S. R., & Poor, H. V. (2002). Bandit problems with side observations. In Proceedings of the IEEE Conference on Decision and Control (Vol. 4, pp. 3988–3993). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1007/978-1-4899-7687-1_100032