Validity and Automated Scoring

Abstract

This chapter examines the automated scoring of answers to constructed-response items through the lens of validity and shows how to conduct automated scoring with validity considerations at the forefront. Automated scoring should be designed as part of a construct-driven, integrated system, because the interplay among system components is complex and that complexity must be accounted for in scoring-program design and validation. For automated scoring, as for score meaning generally, the validity argument should rest on an integrated base of theory and data that allows a comprehensive analysis of how effectively scores represent the construct of interest and how resistant they are to sources of irrelevant variance. Automated scoring technology continues to advance, reaching a level of sophistication and complexity far greater than that currently employed in the field. The chapter offers seven assertions about validity and automated essay scoring; one of the most critical is that validation should be broad based.

Citation (APA)

Bennett, R. E., & Zhang, M. (2015). Validity and Automated Scoring. In Technology and Testing: Improving Educational and Psychological Measurement (pp. 142–173). Taylor and Francis. https://doi.org/10.4324/9781315871493-8
