The effects of careless responding on the fit of confirmatory factor analysis and item response theory models

Abstract

It is common to model responses to surveys within latent variable frameworks (e.g., item response theory [IRT], confirmatory factor analysis [CFA]) and to use model fit indices to evaluate model–data congruence. Unfortunately, research shows that people occasionally engage in careless responding (CR) when completing online surveys. Although CR has the potential to negatively impact model fit, this issue has not been systematically explored. To better understand the CR–fit linkage, two studies were conducted. In Study 1, participants' response behaviors were experimentally shaped, and the resulting data were used to empirically inform a comprehensive simulation (Study 2). The simulation examined 144 unique conditions (varying sample size, number of items, CR prevalence, CR severity, and CR type), two latent variable models (IRT, CFA), and six model fit indices (χ², RMSEA, and SRMSR for CFA; M2, RMSEA, and SRMSR for IRT). The results indicated that CR deteriorates model fit under most circumstances, though these effects are nuanced, variable, and contingent on many factors. These findings can be leveraged by researchers and practitioners to improve survey methods, obtain more accurate survey results, develop more precise theories, and enable more justifiable data-driven decisions.
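
The mechanism behind these results can be illustrated with a small, self-contained simulation. The sketch below is not the article's simulation code: it generates continuous one-factor data, replaces a varying fraction of respondents with uniform random ("careless") answers, and computes a chi-square and RMSEA against the data-generating model rather than against estimated parameters, which is a simplification of the CFA estimation performed in the article. All numeric choices (eight items, loadings of .70, N = 1,000, the CR prevalence levels) are illustrative assumptions.

```python
"""Illustrative sketch only: careless responding and an RMSEA-like fit index.

The model-implied covariance is fixed at the data-generating values rather
than estimated, so the output shows the direction of the effect, not the
magnitudes reported in the article.
"""
import numpy as np

rng = np.random.default_rng(2024)


def simulate_clean(n, loadings):
    """Continuous one-factor data: x = lambda * eta + error, unit item variances."""
    p = len(loadings)
    eta = rng.standard_normal(n)
    errors = rng.standard_normal((n, p)) * np.sqrt(1 - loadings**2)
    return eta[:, None] * loadings + errors


def inject_careless(data, prevalence):
    """Replace a random subset of respondents with uniform random responses."""
    data = data.copy()
    n = data.shape[0]
    careless_idx = rng.choice(n, size=int(prevalence * n), replace=False)
    data[careless_idx] = rng.uniform(-2, 2, size=(len(careless_idx), data.shape[1]))
    return data


def ml_fit(data, loadings):
    """ML discrepancy, chi-square, and RMSEA against the generating model."""
    n, p = data.shape
    S = np.cov(data, rowvar=False)
    sigma = np.outer(loadings, loadings) + np.diag(1 - loadings**2)
    f_ml = (np.log(np.linalg.det(sigma)) - np.log(np.linalg.det(S))
            + np.trace(S @ np.linalg.inv(sigma)) - p)
    chi2 = (n - 1) * f_ml
    df = p * (p + 1) // 2 - 2 * p  # one-factor CFA: p loadings + p uniquenesses
    rmsea = np.sqrt(max((chi2 - df) / (df * (n - 1)), 0.0))
    return chi2, rmsea


loadings = np.full(8, 0.7)                  # 8 items, equal loadings (assumed)
for prevalence in (0.0, 0.05, 0.15, 0.30):  # illustrative CR prevalence levels
    data = inject_careless(simulate_clean(1000, loadings), prevalence)
    chi2, rmsea = ml_fit(data, loadings)
    print(f"CR prevalence {prevalence:.0%}: chi2 = {chi2:7.1f}, RMSEA = {rmsea:.3f}")
```

As the prevalence of careless responding increases, the sample covariance drifts away from the model-implied covariance, so the discrepancy function, chi-square, and RMSEA all rise. This mirrors the direction, though not the specific magnitudes or the full set of conditions and indices, of the fit degradation examined in the article.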

Citation (APA)

Voss, N. M. (2024). The effects of careless responding on the fit of confirmatory factor analysis and item response theory models. Behavior Research Methods, 56(2), 577–599. https://doi.org/10.3758/s13428-023-02074-9
