Evaluations of Automated Scoring Systems in Practice

Abstract

This research report describes the processes of evaluating the "deployability" of automated scoring (AS) systems from the perspective of large-scale educational assessments in operational settings. It discusses a comprehensive psychometric evaluation comprising analyses that take into account the specific purpose of AS, the test design, the quality of human scores, the data collection design needed to train and evaluate the AS model, and the application of statistics and evaluation criteria. Finally, it notes that an effective evaluation of an AS system requires professional judgment coupled with statistical and psychometric knowledge and an understanding of risk assessment and business metrics.
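The "statistics and evaluation criteria" mentioned in the abstract typically include human-machine agreement measures. The sketch below is a hypothetical illustration, not code from the report: it computes three statistics commonly reported in such evaluations (quadratic weighted kappa, Pearson correlation, and standardized mean difference). The function name and the 0-4 rubric example data are assumptions for demonstration only.

```python
# Minimal sketch (not from the report): common human-machine agreement
# statistics used when evaluating an automated scoring model against
# human ratings. Example data are hypothetical.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score


def agreement_summary(human_scores, machine_scores):
    """Return common agreement statistics for one item/prompt."""
    human = np.asarray(human_scores, dtype=float)
    machine = np.asarray(machine_scores, dtype=float)

    # Quadratic weighted kappa: chance-corrected agreement that penalizes
    # large score discrepancies more heavily than small ones.
    qwk = cohen_kappa_score(human.astype(int), machine.astype(int),
                            weights="quadratic")

    # Pearson correlation between human and machine scores.
    r, _ = pearsonr(human, machine)

    # Standardized mean difference: flags systematic shifts in the
    # machine score distribution relative to human scores.
    pooled_sd = np.sqrt((human.var(ddof=1) + machine.var(ddof=1)) / 2)
    smd = (machine.mean() - human.mean()) / pooled_sd

    return {"qwk": qwk, "pearson_r": r, "smd": smd}


# Hypothetical scores on a 0-4 rubric.
human = [2, 3, 3, 4, 1, 2, 3, 0, 4, 2]
machine = [2, 3, 2, 4, 1, 3, 3, 1, 4, 2]
print(agreement_summary(human, machine))
```

In practice, such statistics are compared against pre-specified thresholds and interpreted alongside the test design, human-scoring quality, and operational risk considerations discussed in the report.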

Citation (APA)
Rotou, O., & Rupp, A. A. (2020). Evaluations of Automated Scoring Systems in Practice. ETS Research Report Series, 2020(1), 1–18. https://doi.org/10.1002/ets2.12293
