Alternative item selection strategies for improving test security in computerized adaptive testing of the algorithm

  • Suhardi, I.

Abstract

Maximum likelihood estimation (MLE) is one of the ability estimation methods most widely applied in the Computerized Adaptive Testing (CAT) algorithm. However, MLE cannot produce a finite ability estimate when a test taker's responses contain no mixed pattern of correct and incorrect answers, that is, when the score is either zero or perfect. In such cases the ability is usually estimated with a step-size model. The step-size model, in turn, often leads to item exposure, in which certain items appear far more frequently than others; this threatens test security because frequently appearing items are easier to recognize and memorize. This study offers an alternative item selection strategy that modifies the step-size model and randomizes the computed values of the item information function. The results show that the alternative strategies produce a more varied set of administered items and thereby improve the security of the CAT.
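The abstract describes two mechanisms: a step-size ability update used when MLE has no finite solution, and item selection that randomizes among the most informative items rather than always choosing the single maximum. The sketch below is a minimal illustration of those ideas, not the author's published code; the 3PL item parameters, the step size of 0.5, and the candidate-pool size are assumptions for the example.

```python
# Minimal sketch (assumptions, not the paper's implementation) of a step-size
# ability update and randomized, information-based item selection for CAT.
import math
import random

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = c + (1 - c) / (1 + math.exp(-a * (theta - b)))
    q = 1 - p
    return (a ** 2) * (q / p) * ((p - c) ** 2) / ((1 - c) ** 2)

def step_size_update(theta, all_correct, step=0.5):
    """Fixed step-size update used when MLE diverges, i.e. the response
    pattern so far is all correct or all incorrect."""
    return theta + step if all_correct else theta - step

def select_next_item(theta, item_bank, administered, pool_size=5):
    """Rank unadministered items by information at theta, then draw one at
    random from the top `pool_size` instead of always taking the maximum."""
    candidates = [
        (item_information(theta, *params), item_id)
        for item_id, params in item_bank.items()
        if item_id not in administered
    ]
    candidates.sort(reverse=True)
    return random.choice(candidates[:pool_size])[1]

# Example usage with a small hypothetical item bank: {id: (a, b, c)}.
bank = {i: (random.uniform(0.8, 2.0), random.uniform(-2, 2), 0.2) for i in range(50)}
theta = step_size_update(0.0, all_correct=True)   # e.g. after two correct answers
print(theta, select_next_item(theta, bank, administered={3, 17}))
```

Drawing from a small pool of near-maximally informative items trades a little measurement precision for lower exposure of any single item, which is the security benefit the study reports.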

Citation (APA)
Suhardi, I. (2020). Alternative item selection strategies for improving test security in computerized adaptive testing of the algorithm. REID (Research and Evaluation in Education), 6(1), 32–40. https://doi.org/10.21831/reid.v6i1.30508
