Context attentive bandits: Contextual bandit with restricted context


Abstract

We consider a novel formulation of the multi-armed bandit model, which we call the contextual bandit with restricted context, where only a limited number of features can be accessed by the learner at every iteration. This formulation is motivated by online problems arising in clinical trials, recommender systems, and attention modeling. Herein, we adapt the standard multi-armed bandit algorithm known as Thompson Sampling to take advantage of our restricted context setting, and propose two novel algorithms, called the Thompson Sampling with Restricted Context (TSRC) and the Windows Thompson Sampling with Restricted Context (WTSRC), for handling stationary and nonstationary environments, respectively. Our empirical results demonstrate the advantages of the proposed approaches on several real-life datasets.
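To make the setting concrete, the sketch below shows one way a Thompson Sampling learner can be limited to a fixed budget of observable features per round: feature subsets are drawn with per-feature Beta-Bernoulli Thompson Sampling, and the arm is drawn with Gaussian linear Thompson Sampling on the restricted context. This is a minimal illustration under assumed priors and update rules, not the TSRC algorithm as specified in the paper; the class name, the exploration scale `v`, and the binary-reward assumption are all illustrative choices.

```python
import numpy as np

class RestrictedContextTS:
    """Sketch of a contextual Thompson Sampling learner that can observe
    only `budget` of the `n_features` context features at each round.

    Feature subsets are picked with per-feature Beta-Bernoulli Thompson
    Sampling; the arm is picked with Gaussian linear Thompson Sampling on
    the restricted context. Priors and update rules are illustrative
    assumptions, not the paper's exact TSRC specification.
    """

    def __init__(self, n_arms, n_features, budget, v=0.25):
        self.n_arms, self.n_features, self.budget = n_arms, n_features, budget
        self.v = v  # exploration scale for the linear TS part (assumed)
        # Beta posterior over the "usefulness" of each feature.
        self.alpha = np.ones(n_features)
        self.beta = np.ones(n_features)
        # Per-arm Bayesian linear-regression statistics over the full
        # feature space; unobserved coordinates are simply left at zero.
        self.B = np.stack([np.eye(n_features) for _ in range(n_arms)])
        self.f = np.zeros((n_arms, n_features))

    def select_features(self):
        # Sample a usefulness score per feature, keep the top `budget`.
        scores = np.random.beta(self.alpha, self.beta)
        return np.sort(np.argsort(scores)[-self.budget:])

    def select_arm(self, x_restricted, feats):
        # Embed the observed feature values into a full-length vector.
        x = np.zeros(self.n_features)
        x[feats] = x_restricted
        sampled_rewards = []
        for a in range(self.n_arms):
            B_inv = np.linalg.inv(self.B[a])
            mu = B_inv @ self.f[a]
            theta = np.random.multivariate_normal(mu, self.v ** 2 * B_inv)
            sampled_rewards.append(x @ theta)
        return int(np.argmax(sampled_rewards)), x

    def update(self, arm, x, feats, reward):
        # Standard linear TS posterior update for the chosen arm.
        self.B[arm] += np.outer(x, x)
        self.f[arm] += reward * x
        # Credit the observed features with the (binary) reward.
        self.alpha[feats] += reward
        self.beta[feats] += 1.0 - reward


if __name__ == "__main__":
    # Toy run on synthetic data: 5 arms, 20 features, 5 observable per round.
    rng = np.random.default_rng(0)
    learner = RestrictedContextTS(n_arms=5, n_features=20, budget=5)
    true_theta = rng.normal(size=(5, 20))
    for _ in range(1000):
        context = rng.normal(size=20)
        feats = learner.select_features()
        arm, x = learner.select_arm(context[feats], feats)
        # Bernoulli reward whose mean depends on the full (hidden) context.
        p = 1.0 / (1.0 + np.exp(-true_theta[arm] @ context))
        reward = float(rng.random() < p)
        learner.update(arm, x, feats, reward)
```

The Beta updates above assume rewards in {0, 1}; a sliding-window variant of the same idea, where the statistics are computed only over recent rounds, would correspond in spirit to the nonstationary (WTSRC) setting.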

Citation (APA)

Bouneffouf, D., Rish, I., Cecchi, G. A., & Féraud, R. (2017). Context attentive bandits: Contextual bandit with restricted context. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI 2017) (pp. 1468–1475). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/203
