Automated versus human essay scoring: A comparative study


Abstract

This study investigated the effect of an automated essay scoring (AES) system on the writing improvement of Iranian L2 learners. Sixty Iranian intermediate EFL learners were selected on the basis of a standard English proficiency test (Allen, 2004) and then randomly assigned to an experimental group and a control group of 30 participants each. Essays written by the experimental group were scored by the AES system, while those of the control group were scored by human raters. Statistical analyses of the results reveal that 1) the AES tool leads to a significant improvement in L2 learners' writing achievement, 2) questionnaire responses show that students are favorably disposed toward using the AES tool, and 3) the AES tool does not seem to correlate well with human raters in scoring essays. Hence, the findings of this study indicate that AES tools can help teachers ease their heavy teaching load while helping students improve their writing, and that they can be used as an educational tool in classrooms. © 2012 ACADEMY PUBLISHER.
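The abstract's third finding concerns how well automated scores agree with human ratings. As a minimal sketch of how such agreement is commonly quantified in AES research (this is not the authors' actual analysis, and the score values below are hypothetical), Pearson correlation and quadratic-weighted kappa can be computed over paired scores:

# Sketch only: hypothetical AES and human scores for ten essays,
# used to illustrate two standard agreement measures.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical holistic scores on a 0-5 scale.
aes_scores   = np.array([3, 4, 2, 5, 3, 4, 2, 3, 4, 5])
human_scores = np.array([4, 4, 3, 4, 2, 5, 3, 3, 5, 4])

# Pearson correlation: linear association between the two score sets.
pearson_r = np.corrcoef(aes_scores, human_scores)[0, 1]

# Quadratic-weighted kappa: chance-corrected agreement that penalizes
# larger disagreements more heavily; widely reported in AES studies.
qwk = cohen_kappa_score(aes_scores, human_scores, weights="quadratic")

print(f"Pearson r = {pearson_r:.2f}, quadratic-weighted kappa = {qwk:.2f}")

Low values on either measure would be consistent with the paper's conclusion that the AES tool does not correlate well with human raters.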

Cite

APA

Toranj, S., & Ansari, D. N. (2012). Automated versus human essay scoring: A comparative study. Theory and Practice in Language Studies, 2(4), 719–725. https://doi.org/10.4304/tpls.2.4.719-725
