Efficient crowdsourcing of unknown experts using bounded multi-armed bandits

  • Long Tran-Thanh
  • Sebastian Stein
  • Alex Rogers
  • Nicholas R. Jennings

Abstract

Increasingly, organisations flexibly outsource work on a temporary basis to a global audience of workers. This so-called crowdsourcing has been applied successfully to a range of tasks, from translating text and annotating images, to collecting information during crisis situations and hiring skilled workers to build complex software. While traditionally these tasks have been small and could be completed by non-professionals, organisations are now starting to crowdsource larger, more complex tasks to experts in their respective fields. These tasks include, for example, software development and testing, web design and product marketing. While this emerging expert crowdsourcing offers flexibility and potentially lower costs, it also raises new challenges, as workers can be highly heterogeneous, both in their costs and in the quality of the work they produce. Specifically, the utility of each outsourced task is uncertain and can vary significantly between distinct workers and even between subsequent tasks assigned to the same worker. Furthermore, in realistic settings, workers have limits on the amount of work they can perform and the employer will have a fixed budget for paying workers. Given this uncertainty and the relevant constraints, the objective of the employer is to assign tasks to workers in order to maximise the overall utility achieved.

To formalise this expert crowdsourcing problem, we introduce a novel multi-armed bandit (MAB) model, the bounded MAB. Furthermore, we develop an algorithm to solve it efficiently, called bounded ε-first, which proceeds in two stages: exploration and exploitation. During exploration, it first uses εB of its total budget B to learn estimates of the workers' quality characteristics. Then, during exploitation, it uses the remaining (1-ε)B to maximise the total utility based on those estimates.
Using this technique allows us to derive an O(B^(2/3)) upper bound on its performance regret (i.e., the expected difference in utility between our algorithm and the optimum), which means that as the budget B increases, the regret tends to 0. In addition to this theoretical advance, we apply our algorithm to real-world data from oDesk, a prominent expert crowdsourcing site. Using data from real projects, including historic project budgets, expert costs and quality ratings, we show that our algorithm outperforms existing crowdsourcing methods by up to 300%, while achieving up to 95% of a hypothetical optimum with full information. © 2014 Published by Elsevier B.V.
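The two-stage policy described in the abstract can be sketched in Python. This is an illustrative sketch only, not the authors' implementation: the worker representation (per-task cost, task limit, noisy utility draws) and the greedy density-ordered exploitation step (a bounded-knapsack heuristic) are assumptions made here for illustration.

```python
import random

def bounded_epsilon_first(workers, budget, eps):
    """Illustrative bounded eps-first policy (not the paper's exact code).

    workers: list of dicts with keys
      'cost'  - price charged per task,
      'limit' - maximum number of tasks the worker can perform,
      'draw'  - callable returning one noisy utility sample for a task.
    Returns (total_utility, amount_spent).
    """
    n = len(workers)
    explore_budget = eps * budget
    sums = [0.0] * n          # accumulated utility per worker (exploration)
    counts = [0] * n          # tasks assigned per worker (exploration)
    remaining = [w['limit'] for w in workers]
    total, spent = 0.0, 0.0

    # Stage 1 - exploration: assign tasks round-robin until eps*B is spent,
    # to estimate each worker's mean utility per task.
    progress = True
    while progress:
        progress = False
        for i, w in enumerate(workers):
            if remaining[i] > 0 and spent + w['cost'] <= explore_budget:
                u = w['draw']()
                total += u
                spent += w['cost']
                sums[i] += u
                counts[i] += 1
                remaining[i] -= 1
                progress = True

    # Stage 2 - exploitation: spend the remaining (1-eps)*B greedily on
    # workers in decreasing order of estimated utility per unit cost,
    # respecting each worker's task limit.
    est = [sums[i] / counts[i] if counts[i] else 0.0 for i in range(n)]
    order = sorted(range(n), key=lambda i: est[i] / workers[i]['cost'],
                   reverse=True)
    for i in order:
        w = workers[i]
        while remaining[i] > 0 and spent + w['cost'] <= budget:
            total += w['draw']()
            spent += w['cost']
            remaining[i] -= 1
    return total, spent

# Hypothetical usage: two workers with Bernoulli task utilities.
rng = random.Random(42)
workers = [
    {'cost': 2.0, 'limit': 10, 'draw': lambda: 1.0 if rng.random() < 0.9 else 0.0},
    {'cost': 1.0, 'limit': 40, 'draw': lambda: 1.0 if rng.random() < 0.4 else 0.0},
]
utility, spent = bounded_epsilon_first(workers, budget=30.0, eps=0.2)
```

Note that the exploitation stage here is the simple density-ordered greedy commonly used for knapsack-style budget allocation; the split of the budget into εB and (1-ε)B follows the abstract.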

Author-supplied keywords

  • Budget limitation
  • Crowdsourcing
  • Machine learning
  • Multi-armed bandits

