The increasing diversity of students in contemporary classrooms and the concomitant growth of large-scale testing programs highlight the importance of developing writing assessment programs that are sensitive to the challenges of assessing diverse populations. To this end, this paper provides a framework for conducting consequential validity research on large-scale writing assessment programs. It illustrates this validity model through a series of instrumental case studies drawing on research conducted on writing assessment programs in Canada. We derived the cases from a systematic review of the literature published between January 2000 and December 2012 that directly examined the consequences of large-scale writing assessment for writing instruction in Canadian schools. We also conducted a systematic review of the publicly available documentation on Canadian provincial and territorial government websites that discusses the purposes and uses of their large-scale writing assessment programs. We argue that this model of conducting consequential validity research provides researchers, test developers, and test users with a clearer, more systematic approach to examining the effects of assessment on diverse populations of students. We also argue that this model will enable the development of stronger, more integrated validity arguments. © 2014 by the National Council of Teachers of English.
CITATION STYLE
Slomp, D. H., Corrigan, J. A., & Sugimoto, T. (2014). A framework for using consequential validity evidence in evaluating large-scale writing assessments: A Canadian study. Research in the Teaching of English, 48(3), 276–302. https://doi.org/10.58680/rte201424579