Automated scoring of writing


Abstract

For decades, automated essay scoring (AES) has operated behind the scenes of major standardized writing assessments to provide summative scores of students' writing proficiency (Dikli in J Technol Learn Assess 5(1), 2006). Today, AES systems are increasingly used in low-stakes assessment contexts and as a component of instructional tools in writing classrooms. Despite substantial debate regarding their use, including concerns about writing construct representation (Condon in Assess Writ 18:100-108, 2013; Deane in Assess Writ 18:7-24, 2013), AES has attracted the attention of school administrators, educators, testing companies, and researchers and is now commonly used to reduce human effort and address consistency issues in assessing writing (Ramesh and Sanampudi in Artif Intell Rev 55:2495-2527, 2021). This chapter introduces the affordances and constraints of AES for writing assessment, surveys research on AES effectiveness in classroom practice, and emphasizes implications for writing theory and practice.

Citation (APA)

Link, S., & Koltovskaia, S. (2023). Automated scoring of writing. In Digital Writing Technologies in Higher Education: Theory, Research, and Practice (pp. 333–345). Springer International Publishing. https://doi.org/10.1007/978-3-031-36033-6_21
