One of the primary goals in cognitive diagnosis is to use the item responses from a cognitive diagnostic assessment to infer which skills a test-taker has mastered. Much of the research to date has focused on parametric inference in cognitive diagnosis models (CDMs), which requires that the parametric model used for inference adequately describe the item response distribution of the population of examinees being studied. Whatever the type of model misspecification or misfit, users of CDMs need tools to investigate model-data misfit from a variety of angles. In this chapter we separate model fit methods into four categories defined by two aspects of the methods: (1) the level of the fit analysis, i.e., global/test-level versus item-level; and (2) the choice of the alternative model for comparison, i.e., an alternative CDM (relative fit) or a saturated categorical model (absolute fit).
Han, Z., & Johnson, M. S. (2019). Global- and Item-Level Model Fit Indices. In Methodology of Educational Measurement and Assessment (pp. 265–285). Springer Nature. https://doi.org/10.1007/978-3-030-05584-4_13