This study examined variations of the nonequivalent-groups equating design for mixed-format tests—tests containing both multiple-choice (MC) and constructed-response (CR) items—to determine which design was most effective in producing equivalent scores across the two tests to be equated. Four linking designs were examined: (a) an anchor with only MC items; (b) a mixed-format anchor containing both MC and CR items; (c) a mixed-format anchor incorporating CR item rescoring; and (d) a hybrid combining single-group and equivalent-groups designs, thereby avoiding the need for an anchor test. Designs using MC items alone or those using a mixed anchor without CR item rescoring resulted in much larger bias than the other two design approaches. The hybrid design yielded the smallest root mean squared error value.
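To make the evaluation criteria named above concrete, the following sketch shows one standard way bias and root mean squared error (RMSE) are computed when an estimated equating function is compared with a criterion equating across replications. The function name, the score-point weighting scheme, and the toy numbers are illustrative assumptions, not details taken from the report.

```python
import numpy as np

def equating_error(estimated, criterion, weights=None):
    """Return (bias, rmse) of estimated equated scores vs. a criterion.

    estimated : array of shape (n_replications, n_score_points)
    criterion : array of shape (n_score_points,)
    weights   : optional score-point weights (e.g., score frequencies);
                defaults to equal weighting.
    """
    estimated = np.asarray(estimated, dtype=float)
    criterion = np.asarray(criterion, dtype=float)
    if weights is None:
        weights = np.full(criterion.shape, 1.0 / criterion.size)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()

    error = estimated - criterion                        # per-replication error at each score point
    bias_by_point = error.mean(axis=0)                   # conditional bias at each score point
    rmse_by_point = np.sqrt((error ** 2).mean(axis=0))   # conditional RMSE at each score point

    # Aggregate across score points with the chosen weights.
    overall_bias = float(np.dot(weights, bias_by_point))
    overall_rmse = float(np.sqrt(np.dot(weights, rmse_by_point ** 2)))
    return overall_bias, overall_rmse

# Toy usage with made-up numbers (not data from the study):
rng = np.random.default_rng(0)
criterion = np.linspace(0, 50, 51)                         # criterion equating function
estimated = criterion + rng.normal(0.2, 0.5, (100, 51))    # 100 simulated replications
print(equating_error(estimated, criterion))
```

A design with smaller overall bias and RMSE under this kind of comparison recovers the criterion equating more closely, which is the sense in which the hybrid design performed best in the study.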
CITATION STYLE
Kim, S., Walker, M. E., & McHale, F. (2008). Equating of mixed-format tests in large-scale assessments. ETS Research Report Series, 2008(1), i–26. https://doi.org/10.1002/j.2333-8504.2008.tb02112.x