EQUATING OF MIXED-FORMAT TESTS IN LARGE-SCALE ASSESSMENTS

Abstract

This study examined variations of the nonequivalent-groups equating design for mixed-format tests—tests containing both multiple-choice (MC) and constructed-response (CR) items—to determine which design was most effective in producing equivalent scores across the two tests to be equated. Four linking designs were examined: (a) an anchor with only MC items; (b) a mixed-format anchor containing both MC and CR items; (c) a mixed-format anchor incorporating CR item rescoring; and (d) a hybrid combining single-group and equivalent-groups designs, thereby avoiding the need for an anchor test. Designs using MC items alone or those using a mixed anchor without CR item rescoring resulted in much larger bias than the other two design approaches. The hybrid design yielded the smallest root mean squared error value.
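The abstract evaluates each linking design by the bias and root mean squared error (RMSE) of its equating function relative to a criterion equating. As a minimal sketch of how such criteria are typically computed (not the procedure used in this report), the following illustrates weighted bias and RMSE over the raw-score scale; the names `equated`, `criterion`, and `weights` are hypothetical inputs for illustration.

```python
import numpy as np

# Illustrative sketch (not the authors' procedure): evaluating an
# equating function against a criterion equating at each raw-score
# point. Hypothetical inputs:
#   equated[i]   - Form Y equivalent assigned to raw score i of Form X
#                  by the design under study
#   criterion[i] - the criterion ("true") equivalent for raw score i
#   weights[i]   - proportion of examinees at raw score i

def bias_and_rmse(equated, criterion, weights):
    """Weighted bias and root mean squared error over the score scale."""
    equated = np.asarray(equated, dtype=float)
    criterion = np.asarray(criterion, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # normalize to proportions
    errors = equated - criterion
    bias = np.sum(weights * errors)              # signed average error
    rmse = np.sqrt(np.sum(weights * errors**2))  # penalizes large errors
    return bias, rmse

# Toy usage with made-up numbers on a 0-5 raw-score scale:
equated = [0.2, 1.1, 2.3, 3.2, 4.1, 5.0]
criterion = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
weights = [0.05, 0.15, 0.30, 0.30, 0.15, 0.05]
b, r = bias_and_rmse(equated, criterion, weights)
print(f"bias = {b:.3f}, RMSE = {r:.3f}")
```

A design with small bias can still have a large RMSE if its errors vary widely across the score scale, which is why the abstract reports both criteria.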

Citation (APA)

Kim, S., Walker, M. E., & McHale, F. (2008). Equating of mixed-format tests in large-scale assessments. ETS Research Report Series, 2008(1), i–26. https://doi.org/10.1002/j.2333-8504.2008.tb02112.x
