Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments

  • Daniel Archambault
  • Helen Purchase
  • Tobias Hoßfeld

Abstract

Crowdsourcing enables new possibilities for QoE evaluation by moving the evaluation task from the traditional laboratory environment onto the Internet, giving researchers easy access to a global pool of workers. This not only allows a more diverse population and real-life environments to be included in the evaluation, but also significantly reduces the turnaround time and increases the number of subjects participating in an evaluation campaign, thereby circumventing bottlenecks in traditional laboratory setups. To utilise these advantages, this chapter discusses the differences between laboratory-based and crowd-based QoE evaluation.

Cite

APA

Archambault, D., Purchase, H., & Hoßfeld, T. (Eds.). (2017). Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments (Lecture Notes in Computer Science, Vol. 10264). Springer. Retrieved from http://www.springer.com/series/7409
