To Revise or Not to Revise: Learning to Detect Improvable Claims for Argumentative Writing Support


Abstract

Optimizing the phrasing of argumentative text is crucial in higher education and professional development. However, assessing whether and how the claims in a text should be revised is a hard task, especially for novice writers. In this work, we explore the main challenges to identifying argumentative claims in need of specific revisions. By learning from collaborative editing behaviors in online debates, we seek to capture implicit revision patterns in order to develop approaches that guide writers in how to further improve their arguments. We systematically compare the ability of common word embedding models to capture the differences between versions of the same text, and we analyze their impact on various types of writing issues. To deal with the noisy nature of revision-based corpora, we propose a new sampling strategy based on revision distance. Unlike approaches from prior work, such sampling can be done without additional annotations and judgments. Moreover, we provide evidence that using contextual information and domain knowledge can further improve prediction results. How useful a given type of context is, however, depends on the issue the claim suffers from.
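The abstract does not spell out the sampling procedure, so the sketch below is only a rough illustration of what revision-distance-based sampling of training pairs could look like. It uses normalized edit similarity from Python's difflib as a stand-in distance measure; the function names, the threshold, and the toy data are illustrative assumptions, not taken from the paper.

    from difflib import SequenceMatcher

    def revision_distance(a: str, b: str) -> float:
        # Proxy for how much a claim changed between two versions
        # (0.0 = identical, 1.0 = completely rewritten).
        # The paper's actual distance measure may differ.
        return 1.0 - SequenceMatcher(None, a, b).ratio()

    def sample_training_pairs(histories, min_dist=0.2):
        # From each claim's revision history (ordered oldest to
        # newest), keep (earlier, later) pairs whose surface change
        # exceeds a distance threshold, treating the earlier version
        # as "improvable" and the later one as "improved". This
        # filters out trivial edits (e.g., typo fixes) that would
        # otherwise add label noise.
        pairs = []
        for versions in histories:
            for old, new in zip(versions, versions[1:]):
                if revision_distance(old, new) >= min_dist:
                    pairs.append((old, new))
        return pairs

    # Toy revision history: the first edit only fixes a typo and is
    # filtered out; the second is a substantive revision and is kept.
    histories = [
        [
            "Guns should be banned becuse they are dangerous.",
            "Guns should be banned because they are dangerous.",
            "Guns should be banned because availability correlates with higher homicide rates.",
        ]
    ]
    print(sample_training_pairs(histories, min_dist=0.2))

The key design choice this illustrates is that no human judgments are needed: the revision history itself supplies weak labels, and the distance threshold trades recall for label quality.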

Cite

APA

Skitalinskaya, G., & Wachsmuth, H. (2023). To Revise or Not to Revise: Learning to Detect Improvable Claims for Argumentative Writing Support. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 15799–15816). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.880
