Automatic item generation unleashed: an evaluation of a large-scale deployment of item models


Abstract

Automatic item generation represents a potential solution to the increased item-development demands of this era of continuous testing. However, test items that are automatically generated on the fly pose significant psychometric challenges for item calibration. The solution suggested by a small but growing number of authors is to replace item calibration with item model (or family) calibration and to adopt a multilevel approach in which items are nested within item models. Past research on the feasibility of this approach was limited to simulations or small-scale illustrations of its potential. The purpose of this study was to evaluate the results of a large-scale deployment of automatic item generation in a low-stakes adaptive testing context, with a large number of item models and a very large number of randomly generated item instances.

Citation (APA)

Attali, Y. (2018). Automatic item generation unleashed: an evaluation of a large-scale deployment of item models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10947 LNAI, pp. 17–29). Springer Verlag. https://doi.org/10.1007/978-3-319-93843-1_2
