Abstract
In this paper, test equating is considered as a missing data problem. The unobserved responses of the reference population to the new test must be imputed in order to specify a new cutscore. The cutscore on the new test is chosen such that the proportion of students from the reference population who would have failed the new exam approximately equals the proportion who failed the reference exam. We investigate whether item response theory (IRT) makes it possible to identify the distribution of these missing responses, and the distribution of test scores, from the observed data without parametric assumptions for the ability distribution. We show that while the score distribution is not fully identifiable, the uncertainty about the score distribution on the new test due to non-identifiability is very small. Moreover, ignoring the non-identifiability issue and assuming a normal distribution for ability may lead to bias in test equating, which we illustrate in simulated and empirical data examples.
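The equating idea described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): abilities and item difficulties are simulated under a Rasch (1PL) model, the unobserved new-test scores of the reference population are imputed by simulation, and the new cutscore is chosen so the fail proportion approximately matches that of the reference test. All numbers are illustrative; note that the normal-ability assumption used here is exactly the assumption the paper warns can bias equating.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical setup: all values illustrative, not from the paper ---
n_students = 5000
theta = rng.normal(0.0, 1.0, n_students)  # assumed normal ability distribution;
                                          # the paper shows this assumption can bias equating

b_ref = np.linspace(-1.5, 1.5, 20)        # reference-test item difficulties
b_new = np.linspace(-1.0, 2.0, 20)        # new-test item difficulties (harder test)

def simulate_scores(theta, b, rng):
    """Simulate Rasch (1PL) sum scores: P(correct) = logistic(theta - b)."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    return (rng.random(p.shape) < p).sum(axis=1)

# Observed: reference-population scores on the reference test, with its cutscore.
scores_ref = simulate_scores(theta, b_ref, rng)
cutscore_ref = 10                          # illustrative pass mark on the reference test
fail_rate_ref = np.mean(scores_ref < cutscore_ref)

# Missing data step: impute the unobserved new-test scores for the same population,
# then pick the smallest cutscore whose fail rate reaches the reference fail rate.
scores_new = simulate_scores(theta, b_new, rng)
cutscore_new = min(c for c in range(len(b_new) + 2)
                   if np.mean(scores_new < c) >= fail_rate_ref)

print(f"reference fail rate: {fail_rate_ref:.3f}")
print(f"equated cutscore on the new test: {cutscore_new}")
```

In the paper's nonparametric setting the ability distribution is not assumed known, so the imputation step is the crux: the score distribution on the new test is only partially identified from the observed data, which is the source of the (small) equating uncertainty the authors quantify.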
Citation
Bolsinova, M., & Maris, G. (2016). Can IRT solve the missing data problem in test equating? Frontiers in Psychology, 6(JAN). https://doi.org/10.3389/fpsyg.2015.01956