Abstract
Objectively scoring constructed-response items on educational assessments has long been a challenge because of the reliance on human raters. Even well-trained raters using a rubric can assess essays inaccurately. Unfolding models measure raters' scoring accuracy by capturing the discrepancy between criterion and operational ratings, placing essays on an unfolding continuum with an ideal-point location. An essay's unfolding location indicates how difficult the essay is for raters to score accurately. This study explores a substantive interpretation of the unfolding scale based on a supervised Latent Dirichlet Allocation (sLDA) model. We investigate the relationship between latent topics extracted with sLDA and unfolding locations in a sample of essays (n = 100) obtained from an integrated writing assessment. Results show that (a) three latent topics moderately explain essay locations on the unfolding scale (r² = 0.561) and (b) failing to use and/or cite the source articles produced essays that were difficult to score accurately.
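The headline result (three topic proportions explaining r² = 0.561 of essay locations) rests on a regression of unfolding locations on topic proportions. A minimal sketch of that step, using ordinary least squares in NumPy with synthetic data — the function name `r_squared` and the simulated proportions are illustrative assumptions, not the authors' code; in the study the proportions would come from a fitted sLDA model:

```python
import numpy as np

def r_squared(topic_props, locations):
    """OLS fit of essay locations on topic proportions; returns R^2.

    topic_props: (n_essays, n_topics) array of per-essay topic proportions.
    locations:   (n_essays,) array of unfolding-scale locations.
    """
    # Prepend an intercept column, then solve the least-squares problem.
    X = np.column_stack([np.ones(len(topic_props)), topic_props])
    beta, *_ = np.linalg.lstsq(X, locations, rcond=None)
    resid = locations - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((locations - locations.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

# Synthetic stand-in for the study's data: 100 essays, 3 topics.
rng = np.random.default_rng(0)
props = rng.dirichlet(np.ones(3), size=100)          # topic proportions sum to 1
locs = props @ np.array([1.5, -0.5, 0.2]) + rng.normal(0.0, 0.3, size=100)
print(f"R^2 = {r_squared(props, locs):.3f}")
```

Because the three proportions sum to one, the design matrix with an intercept is rank-deficient; `lstsq` still returns a minimum-norm solution, so the fitted values and R² are well defined.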
Citation
Wheeler, J. M., Engelhard, G., & Wang, J. (2022). Exploring Rater Accuracy Using Unfolding Models Combined with Topic Models: Incorporating Supervised Latent Dirichlet Allocation. Measurement, 20(1), 34–46. https://doi.org/10.1080/15366367.2021.1915094