Recently, artificial intelligence (AI)-programmed automated writing evaluation (AWE) has attracted increasing attention in language research. Using a small data set drawn from the submissions of five Chinese university-level English as a foreign language (EFL) students, this paper examined in detail how EFL students interacted with feedback from Pigai, the largest AI-programmed AWE system in China. The analysis set out to capture the machine feedback on the five students’ submissions and the exchanges between the participants and Pigai over repeated submissions, which ranged from 3 to 12 per student. The analysis showed that the learners’ interactions with Pigai focused on error-corrective feedback in the first two submissions. In the case of one student who made 12 submissions, non-error-corrective feedback increased gradually over time, providing rich linguistic resources but without examples or contextual information. The students’ take-up rates for feedback offering linguistic resources were much lower than those for error-corrective and general feedback. A terrain model mapping the stages and nature of student responses revealed a more complete dynamic process, in which students’ responses shifted from initial mechanical corrections at the discrete language level to more considered engagement with the machine feedback. The findings of this study have implications for both language pedagogy and the future design and development of AWE for second or foreign language learning.
Citation
Yang, H., Gao, C., & Shen, H. Z. (2024). Learner interaction with, and response to, AI-programmed automated writing evaluation feedback in EFL writing: An exploratory study. Education and Information Technologies, 29(4), 3837–3858. https://doi.org/10.1007/s10639-023-11991-3