Investigating the validity of using automated writing evaluation in EFL writing assessment

Abstract

This study takes an argument-based approach to validating the use of an automated writing evaluation (AWE) system, exemplified by Pigai, a Chinese AWE program, in English as a Foreign Language (EFL) writing assessment in China. First, an interpretive argument was developed for its use in College English courses. Second, three sub-studies were conducted to gather evidence for claims concerning score evaluation, score generalization, score explanation, score extrapolation, and feedback utilization. The major findings are: (1) Pigai yields scores that are accurate indicators of the quality of a test performance sample; (2) its scores are consistent across tasks in the same form; (3) its scoring features represent the construct of interest to some extent, yet problems of construct under-representation and construct-irrelevant features remain; (4) its scores are consistent with teachers’ judgments of students’ writing ability; and (5) its feedback has a positive, though limited, impact on students’ development of writing ability. These results suggest that AWE can serve only as a supplement to human evaluation and cannot replace it.
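The paper does not report which agreement statistics underlie finding (4). As a hedged illustration only, the following Python sketch shows how the consistency between an AWE system's scores and teachers' judgments might be quantified, using Pearson correlation and quadratic weighted kappa, two measures commonly used in AWE validation work. The score arrays and the 0-15 scale are invented for illustration and are not data from the study.

```python
# Hypothetical check of machine-human score agreement (cf. finding (4)).
# Scores are made up; the study's actual data and statistics may differ.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Illustrative scores for ten essays on an assumed 0-15 band.
machine = np.array([11, 9, 13, 7, 10, 12, 8, 14, 9, 11])
human = np.array([10, 9, 12, 8, 10, 13, 8, 13, 10, 11])

# Pearson r measures linear association between the two score sets.
r, _ = pearsonr(machine, human)

# Quadratic weighted kappa penalizes large machine-human disagreements
# more heavily than small ones, a common convention in essay scoring.
qwk = cohen_kappa_score(machine, human, weights="quadratic")

print(f"Pearson r = {r:.2f}, quadratic weighted kappa = {qwk:.2f}")
```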

Citation (APA)

Xu, Y. (2018). Investigating the validity of using automated writing evaluation in EFL writing assessment. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11284 LNCS, pp. 127–137). Springer Verlag. https://doi.org/10.1007/978-3-030-03580-8_14
