Abstract
Some language exams include multiple writing tasks. When a learner writes several texts in the same exam, it is not surprising that the quality of these texts tends to be similar, yet existing automated text scoring (ATS) systems do not explicitly model this similarity. In this paper, we suggest that the other texts written by the same learner in the same exam can serve as extra references for an ATS system. We propose several approaches to fusing information from multiple tasks and passing this authorship knowledge into our ATS model, and evaluate them on six different datasets. We show that this can positively affect model performance in most cases.
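The abstract does not specify how information from the learner's other texts is fused into the scorer. Purely as an illustrative sketch of one plausible feature-level strategy, the snippet below concatenates a target essay's features with an aggregate of the same learner's other exam texts before scoring; all function names, features, and the Ridge scorer are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch of feature-level fusion: the scorer for a target essay
# also receives an aggregate of the same learner's other essays from the exam.
# Names (fuse_with_other_tasks, TF-IDF features, Ridge scorer) are illustrative
# assumptions, not the method described in the paper.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

def fuse_with_other_tasks(target_vec, other_vecs):
    """Concatenate the target essay's features with the mean of the
    learner's other essays' features (simple early fusion)."""
    if other_vecs:
        context = np.mean(other_vecs, axis=0)
    else:
        # Learner wrote only one text: fall back to a zero context vector.
        context = np.zeros_like(target_vec)
    return np.concatenate([target_vec, context])

# Toy data: each learner writes two exam tasks; their scores tend to be similar.
learners = [
    {"texts": ["I goes to school yesterday.",
               "He don't like the weather."],
     "scores": [2.0, 2.5]},
    {"texts": ["The report analyses recent trends.",
               "Overall, the data support the claim."],
     "scores": [5.0, 4.5]},
]

vectorizer = TfidfVectorizer()
vectorizer.fit([t for l in learners for t in l["texts"]])

X, y = [], []
for learner in learners:
    vecs = vectorizer.transform(learner["texts"]).toarray()
    for i, score in enumerate(learner["scores"]):
        others = [vecs[j] for j in range(len(vecs)) if j != i]
        X.append(fuse_with_other_tasks(vecs[i], others))
        y.append(score)

scorer = Ridge().fit(np.array(X), np.array(y))
print(scorer.predict(np.array(X[:1])))  # predicted score for the first essay
```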
Citation
Zhang, M., Chen, X., Cummins, R., Andersen, Ø., & Briscoe, T. (2018). The effect of adding authorship knowledge in automated text scoring. In Proceedings of the 13th Workshop on Innovative Use of NLP for Building Educational Applications, BEA 2018 at the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018 (pp. 305–314). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w18-0536