Abstract
Computerized adaptive testing (CAT) is a modern alternative to classical paper-and-pencil testing. CAT is based on automated selection of the optimal item corresponding to the current estimate of the test-taker's ability, in contrast to the fixed, predefined items administered in a linear test. Advantages of CAT include lowered test anxiety, shortened test length, increased precision of estimates of test-takers' abilities, and a lower level of item exposure, and thus better security. Challenges include high technical demands on the whole test workflow and the need for large item banks. In this study, we analyze the feasibility and advantages of computerized adaptive testing using a Monte-Carlo simulation and a post-hoc analysis based on a real linear admission test administered at a medical college. We compare various settings of the adaptive test in terms of precision of ability estimates and test length. We find that with adaptive item selection, the test length can be reduced to 40 out of 100 items while keeping the precision of ability estimates within the prescribed range and obtaining ability estimates highly correlated with estimates based on the complete linear test (Pearson's rho = 0.96). We also demonstrate the positive effect of content balancing and item exposure rate control on item composition.
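The core mechanism the abstract describes, selecting the next item based on the current ability estimate, can be sketched as follows. This is an illustrative toy example, not the paper's implementation: it assumes a 2PL item response model and maximum-Fisher-information item selection, with a randomly generated hypothetical item bank of 100 items.

```python
import math
import random

def prob_correct(theta, a, b):
    """2PL item response model: probability of a correct answer at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = prob_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta_hat, bank, administered):
    """Pick the not-yet-administered item with maximum information at theta_hat."""
    candidates = [i for i in range(len(bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta_hat, *bank[i]))

# Hypothetical item bank: (discrimination a, difficulty b) pairs.
random.seed(0)
bank = [(random.uniform(0.8, 2.0), random.uniform(-2.0, 2.0)) for _ in range(100)]

theta_hat = 0.0          # current ability estimate
administered = set()
for _ in range(5):       # adaptively administer five items
    i = select_next_item(theta_hat, bank, administered)
    administered.add(i)
    # In a real CAT, theta_hat would be re-estimated here from the responses.
```

In an actual adaptive test, each selection step would be followed by re-estimating theta from the accumulated responses (e.g., by maximum likelihood), and the content-balancing and exposure-control constraints studied in the paper would restrict the candidate set.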
Citation
Stepanek, L., & Martinkova, P. (2020). Feasibility of computerized adaptive testing evaluated by Monte-Carlo and post-hoc simulations. In Proceedings of the 2020 Federated Conference on Computer Science and Information Systems, FedCSIS 2020 (pp. 359–367). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.15439/2020F197