Rater centrality, in which raters overuse the middle categories of a rating scale, is a common rater error that can affect test scores and the decisions based on them. Past studies of rater errors have focused on rater severity and inconsistency, largely neglecting rater centrality. This study proposes a new model within the hierarchical rater model (HRM) framework that explicitly specifies and directly estimates rater centrality in addition to rater severity and inconsistency. Simulations conducted with the free software JAGS evaluated the parameter recovery of the new model and the consequences of ignoring rater centrality. The results showed that the model recovered its parameters well, with small bias, low root mean square errors, and high test score reliability, especially when a fully crossed linking design was used. Ignoring centrality yielded poor estimates of item difficulties, person abilities, and rater errors, and it underestimated reliability. We also illustrate the use of the new model with an empirical example involving English essays from the Advanced Placement exam.
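For readers unfamiliar with the HRM framework, the rating stage of the standard hierarchical rater model (Patz et al., 2002) treats an observed rating as a discrete, normal-shaped distribution centered on the examinee's ideal rating shifted by the rater's severity, with the spread governed by the rater's inconsistency. The sketch below adds an illustrative centrality weight that pulls the expected rating toward the scale midpoint; this particular parameterization is an assumption for illustration only and is not necessarily the specification estimated in the article:

P(X_{ir} = k \mid \xi_i) \propto \exp\left\{ -\frac{\left[ k - \left( (1-\omega_r)\,\xi_i + \omega_r m + \varphi_r \right) \right]^2}{2\,\psi_r^{2}} \right\}, \qquad k = 0, 1, \ldots, K,

where \xi_i is examinee i's ideal (true) rating, \varphi_r is rater r's severity, \psi_r is rater r's inconsistency, m is the midpoint of the rating scale, and \omega_r \in [0, 1] is the illustrative centrality weight: \omega_r = 0 recovers the standard HRM rating stage, while larger values shrink observed ratings toward the middle category.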
Qiu, X. L., Chiu, M. M., Wang, W. C., & Chen, P. H. (2022). A new item response theory model for rater centrality using a hierarchical rater model approach. Behavior Research Methods, 54(4), 1854–1868. https://doi.org/10.3758/s13428-021-01699-y