We conducted a simulation study to explore the precision of test outcomes in computerized adaptive testing (CAT) and computerized adaptive multistage testing (ca-MST) when the number of distinct content areas was varied across a range of test lengths. We compared one CAT design and two ca-MST designs (1-3 and 1-3-3 panel designs) across several manipulated conditions, including total test length (24 and 48 items) and the number of controlled content areas. The content area condition had five levels: zero (no content control), two, four, six, and eight content areas. We fully crossed all manipulated conditions within CAT and ca-MST and generated 4,000 examinees from N(0, 1). All other conditions, such as the IRT model and item exposure rate, were held fixed across the CAT and ca-MST designs. Results indicated that test length and the type of test administration model affected the outcomes more than the number of content areas did. The main finding was that, regardless of study condition, CAT outperformed the two ca-MST designs, and the two ca-MST designs were comparable to each other. We discussed the results in relation to control over test design, test content, cost effectiveness, and item pool usage, provided recommendations for practitioners, and listed limitations for further research.
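As a rough illustration of the design described above (not the authors' implementation), the following minimal Python sketch enumerates the fully crossed simulation conditions and draws the simulated examinee abilities from N(0, 1); the seed and variable names are assumptions for the example only.

```python
# Illustrative sketch of the crossed simulation design described in the abstract.
from itertools import product
import numpy as np

designs = ["CAT", "ca-MST 1-3", "ca-MST 1-3-3"]   # test administration models compared
test_lengths = [24, 48]                            # total test length conditions
content_areas = [0, 2, 4, 6, 8]                    # controlled content areas (0 = no content control)

# Fully crossing the manipulated conditions yields 3 x 2 x 5 = 30 cells.
conditions = list(product(designs, test_lengths, content_areas))
print(len(conditions))  # 30

rng = np.random.default_rng(seed=2017)             # hypothetical seed, for reproducibility only
theta = rng.standard_normal(4000)                  # 4,000 examinee abilities drawn from N(0, 1)
```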
CITATION STYLE
Sari, H. İ., & Huggins-Manley, A. C. (2017). Examining content control in adaptive tests: Computerized adaptive testing vs. computerized adaptive multistage testing. Kuram ve Uygulamada Egitim Bilimleri, 17(5), 1759–1781. https://doi.org/10.12738/estp.2017.5.0484