Combinatorial Pure Exploration with Full-Bandit or Partial Linear Feedback


Abstract

In this paper, we first study the problem of combinatorial pure exploration with full-bandit feedback (CPE-BL), where a learner is given a combinatorial action space X ⊆ {0, 1}^d, and in each round the learner pulls an action x ∈ X and receives a random reward with expectation x⊤θ, where θ ∈ R^d is a latent and unknown environment vector. The objective is to identify the optimal action with the highest expected reward, using as few samples as possible. For CPE-BL, we design the first polynomial-time adaptive algorithm, whose sample complexity matches the lower bound (within a logarithmic factor) for a family of instances and has a light dependence on ∆min (the smallest gap between the optimal action and sub-optimal actions). Furthermore, we propose a novel generalization of CPE-BL with flexible feedback structures, called combinatorial pure exploration with partial linear feedback (CPE-PL), which encompasses several families of sub-problems including full-bandit feedback, semi-bandit feedback, partial feedback, and nonlinear reward functions. In CPE-PL, each pull of action x reports a random feedback vector with expectation M_x θ, where M_x ∈ R^(m_x × d) is a transformation matrix for x, and gains a random (possibly nonlinear) reward related to x. For CPE-PL, we develop the first polynomial-time algorithm, which simultaneously addresses limited feedback, general reward functions, and combinatorial action spaces (e.g., matroids, matchings, and s-t paths), and we provide its sample complexity analysis. Our empirical evaluation demonstrates that our algorithms run orders of magnitude faster than existing ones, that our CPE-BL algorithm is robust across different ∆min settings, and that our CPE-PL algorithm is the first to return correct answers for nonlinear reward functions.
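To make the two feedback models concrete, the following is a minimal sketch of the sampling environments described above. The instance (d = 4, size-2 subset actions, the noise level, and the choice of M_x) is hypothetical and only illustrates the interfaces: CPE-BL returns one scalar with mean x⊤θ, while CPE-PL returns a vector with mean M_x θ. Semi-bandit feedback is recovered by letting M_x select the coordinates of θ that x includes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: d = 4 base arms; actions are indicator
# vectors of size-2 subsets (a simple combinatorial action space X).
d = 4
theta = rng.normal(size=d)  # latent environment vector, unknown to the learner
actions = [np.array([1, 1, 0, 0]),
           np.array([1, 0, 1, 0]),
           np.array([0, 0, 1, 1])]

def pull_full_bandit(x, noise=0.1):
    """CPE-BL feedback: a single scalar reward with expectation x^T theta."""
    return x @ theta + rng.normal(scale=noise)

def pull_partial_linear(x, M_x, noise=0.1):
    """CPE-PL feedback: a random vector with expectation M_x theta."""
    return M_x @ theta + rng.normal(scale=noise, size=M_x.shape[0])

# Semi-bandit feedback as a special case of CPE-PL: M_x consists of the
# rows of the identity matrix corresponding to the arms chosen by x.
x = actions[0]
M_x = np.eye(d)[x.astype(bool)]
feedback = pull_partial_linear(x, M_x)  # 2-dimensional feedback vector
```

Full-bandit feedback is then the special case M_x = x⊤ (a single row), which is why CPE-BL sits inside the CPE-PL framework.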

Citation (APA)

Du, Y., Kuroki, Y., & Chen, W. (2021). Combinatorial Pure Exploration with Full-Bandit or Partial Linear Feedback. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 8B, pp. 7262–7270). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i8.16892
