Cross-prompt automated essay scoring (AES) is a challenging task due to the large discrepancies between prompts, such as differences in genre and expression. The main goal of current cross-prompt AES systems is to learn features shared between the source and target prompts so that essays for the target prompt can be scored well. However, because these features are extracted directly from the original essay representations, the amount of shared information they capture is limited by the gap between the two prompts. Intuitively, the more similar the representations of the two prompts are, the more shared features can be obtained between them. Based on this motivation, in this paper we propose a learning strategy called "prompt-mapping" that learns more consistent representations for the source and target prompts. In this way, we can obtain more shared features between the two prompts and use them to better represent essays for the target prompt. Experimental results on the ASAP++ dataset demonstrate the effectiveness of our method, and experiments under different settings show that it can be applied in a variety of scenarios. Our code is available at https://github.com/gdufsnlp/PMAES.
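
To make the prompt-mapping idea concrete, the sketch below illustrates one plausible way to pull source- and target-prompt representation spaces closer with a contrastive objective. It is a minimal illustration, not the exact PMAES formulation: the encoder stand-ins, the projection head, the in-batch pairing of source and target essays, and the symmetric InfoNCE-style loss are all simplifying assumptions made for this example.

    # Minimal sketch (assumptions noted above): align source- and
    # target-prompt essay representations with a contrastive loss.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PromptMapper(nn.Module):
        """Projects essay representations into a shared, normalized space."""
        def __init__(self, dim: int = 768, proj_dim: int = 128):
            super().__init__()
            self.proj = nn.Sequential(
                nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, proj_dim)
            )

        def forward(self, h: torch.Tensor) -> torch.Tensor:
            return F.normalize(self.proj(h), dim=-1)

    def prompt_mapping_loss(src: torch.Tensor, tgt: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
        """InfoNCE-style loss: the i-th source essay is treated as the
        positive for the i-th target essay; all other in-batch pairs are
        negatives. This pairing scheme is an assumption of the sketch."""
        logits = src @ tgt.t() / temperature            # (B, B) similarities
        labels = torch.arange(src.size(0), device=src.device)  # diagonal positives
        # Symmetric loss over both mapping directions.
        return (F.cross_entropy(logits, labels) +
                F.cross_entropy(logits.t(), labels)) / 2

    if __name__ == "__main__":
        mapper = PromptMapper()
        src_h = torch.randn(8, 768)  # stand-in for encoded source-prompt essays
        tgt_h = torch.randn(8, 768)  # stand-in for encoded target-prompt essays
        loss = prompt_mapping_loss(mapper(src_h), mapper(tgt_h))
        print(loss.item())

Minimizing such a loss drives the two prompts' representations toward a common space, which is the property the abstract argues enables more shared features to be extracted for scoring the target prompt.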
Citation: Chen, Y., & Li, X. (2023). PMAES: Prompt-mapping Contrastive Learning for Cross-prompt Automated Essay Scoring. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 1489–1503). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.83