Evaluating Screening Procedures Across Changes to the Statewide Achievement Test


Abstract

Several states have changed their statewide achievement tests over the past 5 years. These changes may pose difficulties for educators tasked with identifying students in need of additional support. This study evaluated the stability of decision-making accuracy estimates across changes to the statewide achievement test. We analyzed extant data from a large suburban district in Wisconsin in 2014–2015 (N = 2,774) and 2015–2016 (N = 2,882). We estimated the decision-making accuracy of recommendations from the Measures of Academic Progress for predicting risk on a Common Core State Standards-aligned test (2014–2015) and a new test based on updated academic standards (2015–2016) in reading and math. Findings suggest that sensitivity and specificity estimates were relatively stable in math. Changes in the criterion measure were associated with decreased sensitivity when predicting performance in reading. These results provide initial support for educators to continue existing screening practices until test vendors or state educational agencies establish cut-scores for predicting risk on the newer test. Using a lower cut-score to establish risk (increasing sensitivity while decreasing specificity) may be prudent in reading. Limitations and directions for future research are discussed.
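The sensitivity–specificity trade-off the abstract describes can be sketched with a small example. All scores below are hypothetical, and the `accuracy_at_cut` helper is illustrative, not a method from the study; the point is only that moving the cut-score in the direction that flags more students raises sensitivity at the cost of specificity:

```python
def accuracy_at_cut(scores_at_risk, scores_not_at_risk, cut):
    """Flag students scoring below `cut` on the screener as at risk.
    Returns (sensitivity, specificity) against known outcome groups:
    sensitivity = flagged share of truly at-risk students,
    specificity = unflagged share of not-at-risk students."""
    true_pos = sum(s < cut for s in scores_at_risk)
    true_neg = sum(s >= cut for s in scores_not_at_risk)
    sensitivity = true_pos / len(scores_at_risk)
    specificity = true_neg / len(scores_not_at_risk)
    return sensitivity, specificity

# Hypothetical screener scores for two outcome groups
at_risk     = [182, 190, 195, 198, 203, 207]   # did not meet the standard
not_at_risk = [200, 205, 210, 214, 220, 225]   # met the standard

# A cut that flags more students trades specificity for sensitivity
for cut in (195, 205):
    sens, spec = accuracy_at_cut(at_risk, not_at_risk, cut)
    print(f"cut={cut}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Which direction of cut-score movement "flags more students" depends on how risk is operationalized; the abstract's recommendation refers to the direction that increases sensitivity for the new reading test.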

APA Citation

Klingbeil, D. A., Van Norman, E. R., Nelson, P. M., & Birr, C. (2018). Evaluating Screening Procedures Across Changes to the Statewide Achievement Test. Assessment for Effective Intervention, 44(1), 17–31. https://doi.org/10.1177/1534508417747390
