The non-stationary stochastic multi-armed bandit problem


Abstract

We consider a variant of the stochastic multi-armed bandit problem with K arms in which the rewards are not assumed to be identically distributed, but are generated by a non-stationary stochastic process. We first study the unique best arm setting, where one arm remains the best throughout. Second, we study the general switching best arm setting, where the best arm changes at some unknown steps. For both settings, we target problem-dependent bounds, instead of the more conservative problem-free bounds. We consider two classical problems: (1) identify a best arm with high probability (best arm identification), for which the performance is measured by the sample complexity (the number of samples needed before finding a near-optimal arm). To this end, we naturally extend the definition of sample complexity so that it makes sense in the switching best arm setting, which may be of independent interest. (2) Achieve the smallest cumulative regret (regret minimization), where the regret is measured with respect to the strategy pulling an arm with the best instantaneous mean at each step.
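The regret notion described in the abstract compares the learner to the arm with the best *instantaneous* mean at each step, rather than to a single fixed arm. A minimal sketch of that bookkeeping, using a made-up switching-best-arm instance (the means schedule and the policy's pull sequence below are hypothetical, chosen only to illustrate the definition):

```python
def instantaneous_regret(means, pulls):
    """Cumulative regret against the best instantaneous mean at each step.

    means: list over time steps t of per-arm mean rewards means[t][a].
    pulls: list of the arm index pulled by the policy at each step t.
    """
    return sum(max(means[t]) - means[t][a] for t, a in enumerate(pulls))

# Hypothetical K = 2 instance: arm 0 is best for the first three steps,
# then the best arm switches to arm 1 at step t = 3.
means = [[0.75, 0.25], [0.75, 0.25], [0.75, 0.25], [0.25, 0.75], [0.25, 0.75]]
pulls = [0, 0, 0, 0, 1]  # the policy reacts to the switch one step late

print(instantaneous_regret(means, pulls))  # 0.5, incurred at the missed step t = 3
```

Against a fixed-arm (stationary) benchmark, the same pull sequence could look nearly optimal; the instantaneous-mean benchmark is what makes the switching setting meaningful.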

Cite

APA

Allesiardo, R., Féraud, R., & Maillard, O. A. (2017). The non-stationary stochastic multi-armed bandit problem. International Journal of Data Science and Analytics, 3(4), 267–283. https://doi.org/10.1007/s41060-017-0050-5
