Using procedure based on item response theory to evaluate classification consistency indices in the practice of large-scale assessment

1 citation · 13 Mendeley readers

Abstract

Despite growing interest in methods for evaluating classification consistency (CC) indices, few studies have applied these methods in the practice of large-scale educational assessment. In addition, few studies have considered the influence of practical factors, such as the examinee ability distribution, the cut score location, and the score scale, on the performance of CC indices. Using Lee's recently developed procedure based on item response theory (IRT), the main purpose of this study was to investigate the performance of CC indices when these practical factors are taken into account. A simulation study and an empirical study were conducted under a comprehensive set of conditions. Results suggested that CC indices were larger under a negatively skewed ability distribution than under other distributions, and that ability distribution, cut score location, and score scale interacted with one another. The findings indicate that Lee's IRT-based procedure is reliable enough for use in large-scale educational assessment, but reported indices should be interpreted with caution because they can vary considerably across testing conditions.
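To make the abstract's method concrete: IRT-based CC procedures of the kind the study evaluates typically compute, for each ability level, the conditional distribution of the summed score (via the Lord-Wingersky recursion), convert it into category probabilities at the cut score, square and sum those probabilities to get the conditional probability of consistent classification across two independent administrations, and then average over the ability distribution. The sketch below illustrates this general construction under a 3PL model; the item parameters, cut score, and quadrature grid are hypothetical values chosen for illustration, not values from the study, and the code is a minimal approximation rather than the authors' implementation.

```python
import numpy as np

# Hypothetical 3PL item parameters (discrimination a, difficulty b, guessing c).
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
b = np.array([-0.5, 0.0, 0.3, 1.0, -1.2])
c = np.array([0.2, 0.15, 0.1, 0.2, 0.25])
cuts = [3]  # assumed cut score(s) on the summed-score scale

def p3pl(theta):
    """3PL probability of a correct response to each item at ability theta
    (with the conventional 1.7 scaling constant)."""
    return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

def summed_score_dist(p):
    """Lord-Wingersky recursion: distribution of the summed score X given
    per-item correct-response probabilities p, so f[x] = P(X = x | theta)."""
    f = np.array([1.0])
    for pi in p:
        g = np.zeros(len(f) + 1)
        g[:-1] += f * (1 - pi)  # item answered incorrectly
        g[1:] += f * pi         # item answered correctly
        f = g
    return f

def conditional_consistency(theta, cuts):
    """phi(theta): probability that two independent administrations place an
    examinee of ability theta in the same score category."""
    f = summed_score_dist(p3pl(theta))
    edges = [0] + list(cuts) + [len(f)]
    pk = np.array([f[edges[k]:edges[k + 1]].sum() for k in range(len(edges) - 1)])
    return np.sum(pk ** 2)

# Marginal CC: average phi(theta) over an assumed N(0, 1) ability distribution,
# approximated here with a simple normalized grid (a skewed distribution would
# just use different weights, which is how the abstract's factor varies).
thetas = np.linspace(-4, 4, 81)
weights = np.exp(-0.5 * thetas ** 2)
weights /= weights.sum()
cc = sum(w * conditional_consistency(t, cuts) for t, w in zip(thetas, weights))
print(f"Marginal classification consistency: {cc:.3f}")
```

With more than one cut score, `cuts` simply lists the category boundaries in ascending order; moving a cut toward the tail of the score distribution, or reweighting the quadrature grid to mimic a skewed ability distribution, reproduces the kinds of factor manipulations the abstract describes.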

Citation (APA)

Zhang, S., Du, J., Chen, P., Xin, T., & Chen, F. (2017). Using procedure based on item response theory to evaluate classification consistency indices in the practice of large-scale assessment. Frontiers in Psychology, 8, Article 1676. https://doi.org/10.3389/fpsyg.2017.01676
