Deriving reliable change statistics from test-retest normative data: Comparison of models and mathematical expressions

Abstract

The use of reliable change (RC) statistics to determine whether an individual has significantly improved or deteriorated on retesting is growing rapidly in clinical neuropsychology. This paper demonstrates how, with only basic test-retest data and a series of simple expressions, the clinician or researcher can implement the majority of contemporary RC models. Though sharing a fundamental structure, RC models vary in how they derive predicted retest scores and standard error terms. Published test-retest normative data and a simple case study are presented to demonstrate how to calculate several well-known RC scores. The paper highlights the circumstances under which models will diverge in the estimation of RC. Most importantly, variations in an individual's performance relative to controls at initial testing, practice effects, inequality of control variability from test to retest, and degree of reliability will produce systematic and predictable disagreement among models. More generally, the limitations and opportunities of RC methodology are discussed. Although a consensus on the preferred model continues to be debated, the comparison of RC models in clinical samples is encouraged. © The Author 2010.
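The paper itself presents the full set of model expressions; as a minimal sketch of the shared structure the abstract describes (a difference from a predicted retest score, divided by a standard error term), the widely used Jacobson–Truax RC index and a practice-adjusted variant can be written as follows. All numeric values below are made-up illustrations, not data from the article.

```python
import math

def rci_jacobson_truax(x1, x2, sd1, r_xx):
    """Classic Jacobson-Truax reliable change index.

    x1, x2 : an individual's test and retest scores
    sd1    : control-group standard deviation at initial testing
    r_xx   : test-retest reliability coefficient
    """
    sem = sd1 * math.sqrt(1 - r_xx)    # standard error of measurement
    s_diff = math.sqrt(2 * sem ** 2)   # standard error of the difference
    return (x2 - x1) / s_diff

def rci_practice_adjusted(x1, x2, mean1, mean2, sd1, r_xx):
    """Practice-adjusted variant: subtract the mean practice effect
    (control retest mean minus control test mean) before dividing."""
    practice = mean2 - mean1
    sem = sd1 * math.sqrt(1 - r_xx)
    s_diff = math.sqrt(2 * sem ** 2)
    return ((x2 - x1) - practice) / s_diff

# Hypothetical normative data: controls score mean 50 (SD 10) at test
# and mean 53 at retest (a 3-point practice effect); reliability .81.
# A patient scores 50 at test and 48 at retest.
rci = rci_jacobson_truax(50, 48, 10, 0.81)          # ≈ -0.32
rci_adj = rci_practice_adjusted(50, 48, 50, 53, 10, 0.81)  # ≈ -0.81
```

An |RCI| exceeding 1.96 is conventionally taken as significant change at the 90% two-tailed confidence level; note how the practice adjustment alone shifts the estimate, illustrating the abstract's point that models diverge predictably when practice effects are present.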

Citation (APA)
Hinton-Bayre, A. D. (2010). Deriving reliable change statistics from test-retest normative data: Comparison of models and mathematical expressions. Archives of Clinical Neuropsychology. Oxford University Press. https://doi.org/10.1093/arclin/acq008
