ReadME – Enhancing automated writing evaluation

Abstract

Writing is a central skill for learning that is tightly linked to text comprehension. Good writing skills are gained through practice and are characterized by clear and organized language, accurate grammar usage, strong text cohesion, and sophisticated wording. Providing constructive feedback can help learners improve their writing; however, doing so is time-consuming. The aim of this paper is to present an updated version of the tool ReadME, which generates automated and personalized feedback designed to help learners improve the quality of their writing. Sampling a corpus of over 15,000 essays, we used the ReaderBench framework to generate more than 1,200 textual complexity indices. These indices were then grouped into six writing components using a Principal Component Analysis (PCA). Based on the components generated by the PCA, as well as individual index values, we created an extensible rule-based engine to provide personalized feedback at four levels of granularity: document, paragraph, sentence, and word. The ReadME tool consists of a multi-layered, interactive visualization interface capable of providing feedback to writers by highlighting sections of text that may benefit from revision.
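The pipeline the abstract describes (complexity indices → PCA components → rule-based feedback) can be summarized in a short sketch. The Python snippet below is a minimal illustration, not the authors' implementation: the index matrix is random toy data, and the component choice, threshold, and feedback message are assumptions for demonstration only.

```python
# Minimal sketch of the described pipeline, assuming toy data in place of
# the 15,000+ essays and 1,200+ ReaderBench indices used in the paper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Toy matrix: rows are essays, columns are textual complexity indices.
rng = np.random.default_rng(0)
indices = rng.normal(size=(100, 12))  # 100 essays, 12 toy indices

# Standardize the indices, then reduce them to six components,
# mirroring the six writing components reported in the paper.
scaled = StandardScaler().fit_transform(indices)
pca = PCA(n_components=6)
components = pca.fit_transform(scaled)

# Hypothetical rule: flag essays whose score on a "cohesion-like"
# component falls more than one standard deviation below the corpus mean.
# The actual engine applies many such rules at document, paragraph,
# sentence, and word levels.
cohesion = components[:, 0]
threshold = cohesion.mean() - cohesion.std()
for essay_id, score in enumerate(cohesion):
    if score < threshold:
        print(f"Essay {essay_id}: consider revising for text cohesion.")
```

In this framing, each rule pairs a component (or individual index) with a corpus-relative threshold and a feedback message, which is what makes the engine extensible: new rules can be added without retraining the PCA.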

Citation (APA)

Sirbu, M. D., Botarleanu, R. M., Dascalu, M., Crossley, S. A., & Trausan-Matu, S. (2018). ReadME – Enhancing automated writing evaluation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11089 LNAI, pp. 281–285). Springer Verlag. https://doi.org/10.1007/978-3-319-99344-7_28
