Resumen

Technology for assessing writing has been under development since the 1960s. Today, natural language processing (Shermis, 2020) has enabled considerable progress. Despite the fecundity of this field, no systematic reviews have been found that address the following questions: In which countries, genres, and educational levels have proposals for assessing writing quality been developed? What are the didactic, technological, and theoretical considerations behind these tools? What role do teachers play in their design and use? And what results have been obtained? This article reviews 164 studies published between 1966 and 2022. Three findings stand out: a) automatic assessment has shifted from a focus on reliable, unbiased, and rapid scoring to assessment centered on feedback; b) teachers play a central role in the design and use of the tools; and c) the tools are a useful support. In addition, the review identifies scant development of tools for the Spanish language.

Keywords: assessment, formative assessment, feedback, computer application, writing.

Abstract

Technology for grading and evaluating written texts has been developed since the early 1960s in English-speaking countries and, predominantly, for the essay genre (Chen and Cheng, 2008; Page, 2003). The first systems, focused on delivering a score for a piece of writing, evolved thanks to natural language processing and artificial intelligence to the point of providing feedback across different discursive genres (Shermis, 2020). Despite these and other advances, these techniques still have detractors, who claim that this type of evaluation seeks to replace teaching tasks, does not resemble evaluation performed by humans, or is concerned only with formal aspects (Vajjala, 2018).
However, many of these concerns stem from a lack of knowledge about the evolution of this field, fear of the replacement of human evaluators (Palermo and Wilson, 2020), and unfamiliarity with the new paradigms of this type of evaluation (Shermis, 2020). Despite the fecundity of this field, to date no systematic reviews have been found that focus on the aforementioned points, so in this study we answer the following questions: What are the didactic, technological, and theoretical considerations of tools for assessing the quality of writing? What is the role of teachers or evaluators in the design and use of these tools? For what purposes are the tools designed and used? What results have been obtained? And in which countries, languages, genres, and educational levels have tools for assessing writing quality been developed? To answer these questions, a systematic review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines (PRISMA, 2020). In total, 164 scientific research articles published between 1966 and 2022 were selected. Among the results, it is observed that the initial objectives of automatic evaluation have shifted from focusing on reliable, unbiased, and rapid scores to focusing on formative evaluation centered on feedback. At the same time, the role of the teacher was found to be paramount in 94% of the works reviewed, since the tools are presented as a support for, not a replacement of, teaching work. Based on these results, we argue that this type of review can be of great help to learn about the current state of
CITATION STYLE
Lillo Fuentes, F. (2023). Evaluación automatizada y semiautomatizada de la calidad de textos escritos: una revisión sistemática. Perspectiva Educacional, 62(2). https://doi.org/10.4151/07189729-vol.62-iss.2-art.1420