Prompt- and Trait Relation-aware Cross-prompt Essay Trait Scoring


Abstract

Automated essay scoring (AES) aims to score essays written for a given prompt, which defines the writing topic. Most existing AES systems assume that the essays to be graded share the prompt used during training and assign only a single holistic score. However, such settings conflict with real educational situations: pre-graded essays for a particular prompt are often lacking, and detailed trait scores for sub-rubrics are required. Thus, predicting multiple trait scores for essays written to unseen prompts (cross-prompt essay trait scoring) remains a challenge for AES. In this paper, we propose a robust model: a prompt- and trait relation-aware cross-prompt essay trait scorer. We encode a prompt-aware essay representation through essay-prompt attention and a topic-coherence feature extracted by a topic-modeling mechanism that requires no labeled data; our model therefore accounts for an essay's prompt adherence even in the cross-prompt setting. To facilitate multi-trait scoring, we design a trait-similarity loss that encapsulates the correlations among traits. Experiments demonstrate the efficacy of our model, which achieves state-of-the-art results across all prompts and traits. Significant improvements on low-resource prompts and on traits that are otherwise scored poorly further indicate the model's strength.
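To make the trait-similarity idea concrete, below is a minimal, illustrative sketch of one way a loss could encourage the pairwise relations among predicted trait scores to match those among gold trait scores. The function name trait_similarity_loss and the cosine-similarity formulation are assumptions for illustration only, not the paper's actual implementation.

```python
# Illustrative sketch only: one plausible trait-similarity regularizer,
# NOT the paper's exact loss. Assumes predicted and gold trait scores
# for a batch of essays, each shaped (batch_size, num_traits).
import torch
import torch.nn.functional as F


def trait_similarity_loss(pred: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    """Penalize mismatch between trait-trait relations in predictions and gold scores."""
    # Treat each trait as a vector over the batch and L2-normalize it.
    pred_t = F.normalize(pred.t(), dim=1)   # (num_traits, batch_size)
    gold_t = F.normalize(gold.t(), dim=1)
    # Cosine similarity between every pair of traits.
    pred_sim = pred_t @ pred_t.t()          # (num_traits, num_traits)
    gold_sim = gold_t @ gold_t.t()
    # Match the predicted trait-similarity structure to the gold structure.
    return F.mse_loss(pred_sim, gold_sim)


if __name__ == "__main__":
    # Example: 4 essays, 3 traits with random scores.
    pred = torch.rand(4, 3)
    gold = torch.rand(4, 3)
    print(trait_similarity_loss(pred, gold))
```

In practice such a term would be added to the main scoring loss with a weighting coefficient; the specific similarity measure and weighting here are assumptions for the sketch.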

Citation (APA)
Do, H., Kim, Y., & Lee, G. G. (2023). Prompt- and Trait Relation-aware Cross-prompt Essay Trait Scoring. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 1538–1551). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.98
