Measurement Error and Bias in Value‐Added Models

  • Kane, M. T.
Abstract

By aggregating residual gain scores (the differences between each student's current score and a predicted score based on prior performance) for a school or a teacher, value‐added models (VAMs) can be used to generate estimates of school or teacher effects. It is known that random errors in the prior scores will introduce bias into predictions of the current scores, and thereby into the estimated residual gain scores and VAM scores. The analyses in this paper examine the origins of this bias and its potential impact and indicate that the bias is an increasing linear function of the student's prior achievement and can be quite large (e.g., half a true‐score standard deviation) for very low‐scoring and high‐scoring students. To the extent that students with relatively low or high prior scores are clustered in particular classes and schools, the student‐level bias will tend to generate bias in VAM estimates of teacher and school effects. Adjusting for this bias is possible, but it requires estimates of generalizability (or reliability) coefficients that are more accurate and precise than those that are generally available for standardized achievement tests.

Report Number: ETS RR‐17‐25
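The mechanism the abstract describes can be illustrated with a minimal simulation sketch. The setup below is illustrative and not from the paper: it assumes a one-predictor model, a prior-score reliability of 0.8, and zero true gains for every student, so any nonzero residual gain is pure bias from measurement error. Because the regression slope of current on *observed* prior scores is attenuated toward the reliability coefficient, predictions overshoot low scorers and undershoot high scorers, producing residual-gain bias that is approximately linear in true prior achievement.

```python
import random

random.seed(0)
N = 20_000
rho = 0.8  # assumed reliability of the prior-score test (illustrative)

# True prior achievement ~ N(0, 1); for simplicity every student's true
# current score equals their true prior score (all true gains are zero).
true_prior = [random.gauss(0.0, 1.0) for _ in range(N)]
current = list(true_prior)

# Observed prior = true prior + random error; the error variance is chosen
# so that reliability = var(true) / var(observed) = rho.
err_sd = (1.0 / rho - 1.0) ** 0.5
obs_prior = [t + random.gauss(0.0, err_sd) for t in true_prior]

# OLS regression of current on the *observed* prior: the slope is
# attenuated from 1 toward rho.
mx = sum(obs_prior) / N
my = sum(current) / N
sxy = sum((x - mx) * (y - my) for x, y in zip(obs_prior, current))
sxx = sum((x - mx) ** 2 for x in obs_prior)
slope = sxy / sxx  # close to rho, not 1
intercept = my - slope * mx

# Residual gain scores: current score minus predicted current score.
resid = [y - (intercept + slope * x) for x, y in zip(obs_prior, current)]

# Every true gain is zero, yet residual gains are biased upward for
# high-prior students and downward for low-prior students.
n_high = sum(1 for t in true_prior if t > 1)
n_low = sum(1 for t in true_prior if t < -1)
mean_high = sum(r for r, t in zip(resid, true_prior) if t > 1) / n_high
mean_low = sum(r for r, t in zip(resid, true_prior) if t < -1) / n_low
print(f"slope={slope:.2f}  bias(high)={mean_high:+.2f}  bias(low)={mean_low:+.2f}")
```

If students with similar prior scores cluster in the same classrooms, averaging these biased residuals yields biased teacher or school effects; the deattenuation adjustment the paper discusses amounts to correcting the slope using the reliability coefficient, which is why accurate reliability estimates are required.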

Citation (APA)

Kane, M. T. (2017). Measurement Error and Bias in Value‐Added Models. ETS Research Report Series, 2017(1), 1–12. https://doi.org/10.1002/ets2.12153
