Human computation games lack established ways of balancing the difficulty of tasks or levels served to players, potentially contributing to their low engagement rates. Traditional player rating systems have been suggested as a potential solution: using them to rate both players and tasks could estimate player skill and task difficulty and fuel player-task matchmaking. However, neither the effect of difficulty balancing on engagement in human computation games nor the use of player rating systems for this purpose has been empirically tested. We therefore examined the engagement effects of using the Glicko-2 player rating system to order tasks in the human computation game Paradox. An online experiment (n=294) found that both matchmaking-based and pure difficulty-based ordering of tasks led to significantly more attempted and completed levels than random ordering. Additionally, both matchmaking and random ordering led to significantly more difficult tasks being completed than pure difficulty-based ordering. We conclude that poor balancing contributes to poor engagement in human computation games, and that player rating system-based difficulty rating may be a viable and efficient way of improving both.
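To make the three ordering conditions concrete, the sketch below illustrates one way they could be implemented, assuming player and level ratings (and rating deviations) are maintained separately by a Glicko-2 implementation. The names `Rated`, `order_by_difficulty`, and `order_by_matchmaking`, and the example rating values, are hypothetical and not taken from the paper; the paper's exact ordering logic may differ.

```python
import random
from dataclasses import dataclass
from typing import List


@dataclass
class Rated:
    """A rated entity (player or level) with Glicko-2-style values."""
    name: str
    rating: float      # rating on the Glicko-2 scale (default around 1500)
    deviation: float   # rating deviation (uncertainty about the rating)


def order_by_difficulty(levels: List[Rated]) -> List[Rated]:
    """Pure difficulty-based ordering: easiest (lowest-rated) level first."""
    return sorted(levels, key=lambda lv: lv.rating)


def order_by_matchmaking(player: Rated, levels: List[Rated]) -> List[Rated]:
    """Matchmaking-based ordering: levels whose rating is closest to the
    player's current rating are served first."""
    return sorted(levels, key=lambda lv: abs(lv.rating - player.rating))


def order_randomly(levels: List[Rated]) -> List[Rated]:
    """Random ordering baseline."""
    shuffled = levels[:]
    random.shuffle(shuffled)
    return shuffled


# Hypothetical usage: a mid-skill player and five levels of varying difficulty.
player = Rated("player", rating=1550.0, deviation=120.0)
levels = [Rated(f"level-{i}", rating=r, deviation=80.0)
          for i, r in enumerate([1300, 1500, 1620, 1800, 2000])]

print([lv.name for lv in order_by_matchmaking(player, levels)])
```

Under these assumptions, matchmaking serves levels rated near the player's own rating first, while pure difficulty ordering always starts from the easiest level regardless of player skill.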
Sarkar, A., Williams, M., Deterding, S., & Cooper, S. (2017). Engagement effects of player rating system-based matchmaking for level ordering in human computation games. In ACM International Conference Proceeding Series (Vol. Part F130151). Association for Computing Machinery. https://doi.org/10.1145/3102071.3102093