We used data simulations to test whether composites consisting of cohesive subtest scores are more accurate than composites consisting of divergent subtest scores. We demonstrate that when multivariate normality holds, divergent and cohesive scores are equally accurate. Furthermore, excluding divergent scores results in biased estimates of construct scores. We show that, under some conditions, obtaining divergent scores should prompt additional testing. Although there are many valid reasons to exclude scores from consideration (e.g., malingering, fatigue, and misunderstood directions), no score should be removed from a composite simply because it differs from the other scores in the composite.
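The logic of such a simulation can be sketched in a few lines of Python. The sketch below is illustrative only and is not the authors' original simulation code: it assumes a one-factor model with four subtests that each load 0.8 on a normally distributed construct, labels the quarter of profiles with the widest subtest spread "divergent," and drops each profile's most discrepant subtest to mimic excluding a divergent score; all of these names and parameter values are assumptions chosen for illustration. Under this model the spread of a person's subtests is independent of the construct, so the regression-based construct estimate should be about equally accurate for cohesive and divergent profiles, while the trimmed composite should track the construct less well than the full composite.

# Illustrative sketch of a cohesive-vs-divergent composite simulation
# (one-factor model; values are assumptions, not the authors' design).
import numpy as np

rng = np.random.default_rng(seed=1)
n, k, loading = 200_000, 4, 0.8

g = rng.normal(size=n)                                 # true construct scores
e = rng.normal(size=(n, k)) * np.sqrt(1 - loading**2)  # unique (error) parts
subtests = loading * g[:, None] + e                    # multivariate normal subtests

composite = subtests.mean(axis=1)

# Regression-based estimate of the construct from the full composite.
c = np.cov(g, composite)
g_hat = (c[0, 1] / c[1, 1]) * composite
abs_err = np.abs(g_hat - g)

# Call the quarter of profiles with the widest subtest spread "divergent."
spread = subtests.max(axis=1) - subtests.min(axis=1)
divergent = spread > np.quantile(spread, 0.75)

print("mean |error|, cohesive profiles: ", abs_err[~divergent].mean())
print("mean |error|, divergent profiles:", abs_err[divergent].mean())

# Drop each profile's most discrepant subtest and rebuild the composite.
discrepancy = np.abs(subtests - composite[:, None])
keep = np.ones_like(subtests, dtype=bool)
keep[np.arange(n), discrepancy.argmax(axis=1)] = False
trimmed = subtests[keep].reshape(n, k - 1).mean(axis=1)

print("r(full composite, construct):   ", np.corrcoef(composite, g)[0, 1])
print("r(trimmed composite, construct):", np.corrcoef(trimmed, g)[0, 1])

Running this sketch, the mean absolute estimation error is essentially identical for cohesive and divergent profiles, and the trimmed composite correlates less strongly with the construct than the full composite, consistent with the conclusions summarized above.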
Schneider, W. J., & Roman, Z. (2018). Fine-Tuning Cross-Battery Assessment Procedures: After Follow-Up Testing, Use All Valid Scores, Cohesive or Not. Journal of Psychoeducational Assessment, 36(1), 34–54. https://doi.org/10.1177/0734282917722861