Score It All Together: A Multi-Task Learning Study on Automatic Scoring of Argumentative Essays

Abstract

When scoring argumentative essays in an educational context, not only the presence or absence of certain argumentative elements but also their quality is important. On the recently published student essay dataset PERSUADE, we first show that the automatic scoring of argument quality benefits from additional information about the surrounding context, the writing prompt, and the argument type. We then explore different combinations of three tasks: automatic span detection, argument type prediction, and quality prediction. Results show that a multi-task learning approach combining the three tasks outperforms sequential approaches that first learn to segment an essay and then predict the quality and type of each segment.
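The paper itself does not include code; the following is a minimal, hypothetical sketch of what such a joint model could look like: a shared transformer encoder with three token-level heads, one for span segmentation (BIO tags), one for argument-type classification, and one for quality prediction. The encoder name, the label counts per head, and the use of a scalar quality regression are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch (not the authors' code): one shared encoder, three task heads,
# trained jointly so that segmentation, type, and quality share representations.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskEssayScorer(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased",
                 num_span_tags=3, num_arg_types=7, dropout=0.1):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        dim = self.encoder.config.hidden_size
        self.dropout = nn.Dropout(dropout)
        # Token-level heads: BIO span tagging and argument-type tagging.
        self.span_head = nn.Linear(dim, num_span_tags)
        self.type_head = nn.Linear(dim, num_arg_types)
        # Token-level quality score, later aggregated over a predicted segment
        # (assumed here to be a scalar regression).
        self.quality_head = nn.Linear(dim, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        hidden = self.dropout(hidden)
        return {
            "span_logits": self.span_head(hidden),             # (batch, seq, num_span_tags)
            "type_logits": self.type_head(hidden),             # (batch, seq, num_arg_types)
            "quality": self.quality_head(hidden).squeeze(-1),  # (batch, seq)
        }

if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = MultiTaskEssayScorer()
    batch = tok(["Recycling should be mandatory because it reduces waste."],
                return_tensors="pt", truncation=True)
    out = model(batch["input_ids"], batch["attention_mask"])
    # Joint training would sum per-task losses, e.g.
    # loss = ce(span_logits, span_tags) + ce(type_logits, type_tags) + mse(quality, scores)
    print({k: v.shape for k, v in out.items()})
```

A sequential baseline, by contrast, would run a segmentation model first and then pass each predicted span to separate type and quality classifiers; the multi-task setup above replaces that pipeline with a single forward pass over shared representations.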

Citation (APA)

Ding, Y., Bexte, M., & Horbach, A. (2023). Score It All Together: A Multi-Task Learning Study on Automatic Scoring of Argumentative Essays. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 13052–13063). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.825
