A new item response theory model to adjust data allowing examinee choice

Abstract

In a typical questionnaire testing situation, examinees are not allowed to choose which items they answer because of the technical difficulty of obtaining satisfactory statistical estimates of examinee ability and item difficulty under self-selected items. This paper introduces a new item response theory (IRT) model that incorporates information from a novel representation of questionnaire data based on network analysis. Three scenarios in which examinees select a subset of items were simulated. In the first scenario, the assumptions required to apply the standard Rasch model are met, establishing a reference for parameter accuracy. The second and third scenarios include five increasing levels of violation of those assumptions. The results show substantial improvements over the standard model in item parameter recovery, and the accuracy was close to the reference in almost every evaluated scenario. To the best of our knowledge, this is the first proposal to obtain satisfactory IRT statistical estimates in the last two scenarios.
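For context, the standard Rasch model referenced in the abstract expresses the probability of a correct response as a logistic function of the difference between examinee ability and item difficulty. The sketch below uses conventional notation (theta for ability, b for difficulty), which may differ from the symbols used in the paper itself:

\[
P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)},
\]

where $X_{ij}$ is the scored (0/1) response of examinee $i$ to item $j$, $\theta_i$ is the ability of examinee $i$, and $b_j$ is the difficulty of item $j$. When examinees choose which items to answer, the missing responses are generally not missing at random, which is what undermines standard estimation of $\theta_i$ and $b_j$ and motivates the model proposed in the article.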

Citation (APA)

Pena, C. S., Costa, M. A., & Oliveira, R. P. B. (2018). A new item response theory model to adjust data allowing examinee choice. PLoS ONE, 13(2). https://doi.org/10.1371/journal.pone.0191600
