Latent-Class-Based Item Selection for Computerized Adaptive Progress Tests

  • van Buuren N
  • Eggen T

Abstract

Standard computerized adaptive testing (CAT) methods require an underlying item response theory (IRT) model. An item bank can be constructed from the IRT model, and subsequent items can be selected with maximum information at the examinee’s estimated ability level. IRT models, however, do not always fit test data exactly. In such situations, it is not possible to employ standard CAT methods without violating assumptions. To extend the scope of adaptive testing, this research shows how latent class analysis (LCA) can be used in item bank construction. In addition, the research investigates suitable item selection algorithms using Kullback-Leibler (KL) information for item banks based on LCA. The KL information values can be used to select items and to construct an adaptive test. Simulations show that item selection based on KL information outperformed random selection of items in progress testing. The effectiveness of the selection algorithm is evaluated, and a possible scoring for the new adaptive item selection with two classes is proposed. The applicability of the methods is illustrated by constructing a computerized adaptive progress test (CAPT) on an example data set drawn from the Dutch Medical Progress Test.
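As a rough illustration of the selection principle the abstract describes, the sketch below picks, for a two-class item bank, the unadministered item whose class-conditional success probabilities are most discriminating in the Kullback-Leibler sense. The item bank, the symmetrized KL criterion, and all names here are illustrative assumptions, not the paper's exact algorithm (which also weights information by the examinee's current class posterior).

```python
import math

def kl_divergence(p1, p2):
    """KL divergence between two Bernoulli distributions with success
    probabilities p1 and p2 (assumed strictly between 0 and 1)."""
    return (p1 * math.log(p1 / p2)
            + (1 - p1) * math.log((1 - p1) / (1 - p2)))

def select_item(item_bank, administered):
    """Return the unadministered item whose class-conditional success
    probabilities best separate the two latent classes.

    item_bank maps item id -> (P(correct | class 1), P(correct | class 2)).
    """
    best_item, best_kl = None, -1.0
    for item_id, (p_class1, p_class2) in item_bank.items():
        if item_id in administered:
            continue
        # Symmetrized KL, so the item discriminates in both directions.
        kl = kl_divergence(p_class1, p_class2) + kl_divergence(p_class2, p_class1)
        if kl > best_kl:
            best_item, best_kl = item_id, kl
    return best_item

# Hypothetical three-item bank for a two-class LCA model.
bank = {"A": (0.9, 0.2), "B": (0.6, 0.5), "C": (0.7, 0.3)}
print(select_item(bank, administered=set()))  # "A": largest class separation
```

Repeating this step after each response, with class membership probabilities updated from the answers so far, yields the adaptive test the paper constructs.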

Citation (APA)

van Buuren, N., & Eggen, T. (2017). Latent-Class-Based Item Selection for Computerized Adaptive Progress Tests. Journal of Computerized Adaptive Testing, 5(2), 22–43. https://doi.org/10.7333/1704-0502022
