Automated essay scoring: Psychometric guidelines and practices

Abstract

In this paper, we provide an overview of the psychometric procedures and guidelines Educational Testing Service (ETS) uses to evaluate automated essay scoring for operational use. We briefly describe the e-rater system, the procedures and criteria used to evaluate e-rater, implications for a range of potential uses of e-rater, and directions for future research. The description of e-rater includes a summary of the characteristics of writing covered by e-rater, the variations in modeling techniques available, and the regression-based model building procedure. The evaluation procedures cover multiple criteria, including association with human scores, distributional differences, subgroup differences, and association with external variables of interest. Expected levels of performance for each evaluation are provided. We conclude that establishing performance expectations a priori and evaluating e-rater against these expectations help ensure that automated scoring makes a positive contribution to the large-scale assessment of writing. We call for continuing transparency in the design of automated scoring systems and for clear and consistent performance expectations before such systems are used operationally.
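To make the evaluation criteria the abstract names more concrete, the following is a minimal illustrative sketch, not code from the paper or from ETS. It assumes a simple linear regression from essay features to human scores (standing in for the regression-based model building the abstract mentions), and it assumes quadratic weighted kappa, Pearson correlation, and a standardized mean difference as the concrete measures of association with human scores and distributional differences. The feature names, data, and thresholds are all hypothetical.

```python
# Illustrative sketch only (not ETS/e-rater code): a regression-based scoring
# model evaluated against human scores with agreement and distributional
# metrics of the kind the abstract describes. All data are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical essay features (e.g., development, organization, mechanics)
# and human scores on a 1-6 reporting scale.
features = rng.normal(size=(200, 3))
human = np.clip(
    np.round(3.5 + features @ np.array([0.8, 0.6, 0.4])
             + rng.normal(scale=0.5, size=200)),
    1, 6,
)

# Regression-based model building: predict human scores from features,
# then round predictions back onto the reporting scale.
model = LinearRegression().fit(features, human)
engine = np.clip(np.round(model.predict(features)), 1, 6)

# Association with human scores: quadratic weighted kappa and Pearson r.
qwk = cohen_kappa_score(human.astype(int), engine.astype(int),
                        weights="quadratic")
r = np.corrcoef(human, engine)[0, 1]

# Distributional difference: standardized mean difference between the
# engine and human score distributions (pooled SD in the denominator).
pooled_sd = np.sqrt((human.std(ddof=1) ** 2 + engine.std(ddof=1) ** 2) / 2)
smd = (engine.mean() - human.mean()) / pooled_sd

print(f"QWK = {qwk:.3f}, r = {r:.3f}, standardized mean diff = {smd:.3f}")
```

In an operational evaluation of the kind the paper discusses, such statistics would be computed on independent validation essays and compared against pre-specified performance expectations rather than on the model-building sample as in this sketch.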

Citation (APA)

Ramineni, C., & Williamson, D. M. (2013). Automated essay scoring: Psychometric guidelines and practices. Assessing Writing, 18(1), 25–39. https://doi.org/10.1016/j.asw.2012.10.004
