Measuring learning in serious games: A case study with structural assessment

Abstract

The effectiveness of serious games is often measured with verbal assessment. As an alternative, we propose Pathfinder structural assessment, defined as measuring the learners' knowledge organization and comparing it with a referent structure, which comprises three steps: knowledge elicitation, knowledge representation, and knowledge evaluation. We discuss practical and theoretical considerations for the use of structural assessment and showcase its application with the game Code Red: Triage. Results suggest that structural assessment measures an individual's understanding of a domain in a way that differs, at least in part, from verbal assessment. While verbal assessment may provide a more nuanced picture of declarative and procedural knowledge, structural assessment may add an in-depth understanding of the concepts that are regarded as important in a domain. In the Discussion we propose four guidelines for using structural assessment effectively in serious games: (1) determine the appropriateness of the domain for structural assessment, (2) select an appropriate referent for the target group(s), (3) select the number of concepts needed for structural assessment, and (4) consider analyzing the graphical knowledge representations to obtain in-depth information about the quality of the knowledge structures. © 2011 The Author(s).
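To make the three steps concrete, the sketch below walks through a toy version of the procedure in Python: pairwise relatedness ratings (elicitation) are reduced to Pathfinder networks (representation) and the learner's network is compared with a referent network (evaluation). The triage concepts, the random ratings, the PFnet parameters (r = infinity, q = n - 1), and the link-overlap similarity index are illustrative assumptions, not the authors' materials or exact metric; published Pathfinder work typically relies on dedicated software and neighborhood-based closeness measures.

```python
# Minimal sketch of the three structural-assessment steps, under the
# assumptions stated above (hypothetical concepts, random ratings,
# simple link-overlap similarity). Not the authors' implementation.

from itertools import combinations

import numpy as np


def pathfinder_network(distances: np.ndarray) -> np.ndarray:
    """Derive a PFnet(r=inf, q=n-1): keep a link only if no indirect path
    has a smaller maximum link distance (minimax criterion)."""
    n = distances.shape[0]
    minimax = distances.copy()
    # Floyd-Warshall variant: path cost = max of the link distances on the path.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                via_k = max(minimax[i, k], minimax[k, j])
                if via_k < minimax[i, j]:
                    minimax[i, j] = via_k
    # A link survives when its direct distance equals the minimax distance.
    return (distances <= minimax) & ~np.eye(n, dtype=bool)


def link_overlap(net_a: np.ndarray, net_b: np.ndarray) -> float:
    """Illustrative evaluation step: shared links / union of links."""
    a, b = np.triu(net_a, 1), np.triu(net_b, 1)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0


# Step 1 -- knowledge elicitation: pairwise relatedness ratings
# (1 = highly related, 9 = unrelated) for hypothetical triage concepts.
concepts = ["victim", "breathing", "pulse", "walking wounded", "tag colour"]
n = len(concepts)
rng = np.random.default_rng(0)
learner = np.zeros((n, n))
referent = np.zeros((n, n))
for i, j in combinations(range(n), 2):
    learner[i, j] = learner[j, i] = rng.integers(1, 10)
    referent[i, j] = referent[j, i] = rng.integers(1, 10)

# Step 2 -- knowledge representation: reduce ratings to Pathfinder networks.
learner_net = pathfinder_network(learner)
referent_net = pathfinder_network(referent)

# Step 3 -- knowledge evaluation: similarity of the learner's network
# to the referent (expert) network.
print(f"similarity to referent: {link_overlap(learner_net, referent_net):.2f}")
```

In practice the ratings would come from learners and a domain expert rather than a random generator, and the resulting similarity score is the quantity that can then be related to verbal assessment outcomes.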

Citation (APA)

Wouters, P., van der Spek, E. D., & van Oostendorp, H. (2011). Measuring learning in serious games: A case study with structural assessment. Educational Technology Research and Development, 59(6), 741–763. https://doi.org/10.1007/s11423-010-9183-0
