Automatic scoring of an analytical response-to-text assessment

20 Citations
35 Readers (Mendeley users who have this article in their library)

Abstract

In analytical writing in response to text, students read a complex text and adopt an analytic stance in their writing about it. To evaluate this type of writing at scale, an automated approach for Response to Text Assessment (RTA) is needed. With the long-term goal of producing informative feedback for students and teachers, we design a new set of interpretable features that operationalize the Evidence rubric of RTA. When evaluated on a corpus of essays written by students in grades 4–6, these features outperform baselines built from well-performing features used in other types of essay assessment.
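The abstract does not spell out the features themselves, but its central idea, interpretable features that directly operationalize an Evidence rubric, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the TOPIC_WORDS lists, the evidence_features function, and the three features shown (topic coverage, topic-word mentions, mention density) are hypothetical stand-ins, not the paper's actual feature set.

    # Illustrative sketch only; not the features from Rahimi et al. (2014).
    # Assumes a hand-built list of topic words per evidence topic in the
    # source text (hypothetical data below).
    from typing import Dict, List

    TOPIC_WORDS: Dict[str, List[str]] = {
        "topic_1": ["bed", "nets", "medicine", "malaria"],
        "topic_2": ["school", "fees", "supplies", "education"],
    }

    def evidence_features(essay: str) -> Dict[str, float]:
        """Compute simple, rubric-aligned features over an essay's evidence use."""
        tokens = essay.lower().split()
        token_set = set(tokens)
        # How many evidence topics the essay mentions at least once.
        topics_covered = sum(
            1 for words in TOPIC_WORDS.values() if token_set & set(words)
        )
        # Total topic-word mentions, a rough proxy for amount of evidence cited.
        mentions = sum(
            tokens.count(w) for words in TOPIC_WORDS.values() for w in words
        )
        return {
            "topics_covered": float(topics_covered),
            "topic_word_mentions": float(mentions),
            "mention_density": mentions / max(len(tokens), 1),
        }

    if __name__ == "__main__":
        essay = "The village struggled with malaria until bed nets and medicine arrived."
        print(evidence_features(essay))

A real scorer would feed such features to a supervised model trained against rubric scores; the point of the sketch is only that each feature stays directly explainable to a student or teacher, which is what distinguishes this approach from opaque baseline feature sets.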

APA citation

Rahimi, Z., Litman, D. J., Correnti, R., Matsumura, L. C., Wang, E., & Kisa, Z. (2014). Automatic scoring of an analytical response-to-text assessment. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8474 LNCS, pp. 601–610). Springer Verlag. https://doi.org/10.1007/978-3-319-07221-0_76
