Exploration of Annotation Strategies for Automatic Short Answer Grading

Abstract

Automatic Short Answer Grading aims to automatically grade short answers written by students. Recent work has shown that this task can be effectively reformulated as a Natural Language Inference problem. The state of the art relies on large pretrained language models fine-tuned on the in-domain dataset, but how to quantify the effectiveness of these models in small-data regimes remains an open issue. In this work we present a set of experiments that analyse the impact of different annotation strategies when there are not enough training examples to fine-tune the model. We find that, when annotating few examples, it is preferable to cover more questions than to collect more answers per question. With this annotation strategy, our model outperforms state-of-the-art systems while using only 10% of the full training set. Finally, our experiments show that using out-of-domain annotated question-answer examples can be harmful when fine-tuning the models.
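
To illustrate the NLI reformulation described in the abstract, the sketch below pairs a reference answer (premise) with a student answer (hypothesis) and scores the pair with an off-the-shelf NLI checkpoint. This is a minimal sketch, assuming the Hugging Face transformers library and the public roberta-large-mnli checkpoint; the paper's actual models, labels, and fine-tuning setup may differ.

```python
# Minimal sketch: Automatic Short Answer Grading cast as Natural Language Inference.
# Assumptions (not from the paper): Hugging Face `transformers` and the public
# `roberta-large-mnli` checkpoint stand in for the authors' fine-tuned model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # hypothetical stand-in for the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def grade(reference_answer: str, student_answer: str) -> dict:
    """Treat the reference answer as premise and the student answer as hypothesis;
    an 'entailment' prediction is read as a correct answer."""
    inputs = tokenizer(reference_answer, student_answer,
                       return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1).squeeze(0)
    labels = [model.config.id2label[i] for i in range(probs.size(0))]
    return dict(zip(labels, probs.tolist()))

if __name__ == "__main__":
    reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
    student = "Plants use sunlight to make sugar."
    print(grade(reference, student))  # e.g. probabilities for CONTRADICTION / NEUTRAL / ENTAILMENT
```

In this framing, collecting annotations for more distinct questions (rather than more answers per question) changes which premise-hypothesis pairs the model sees during fine-tuning, which is the variable the paper's annotation-strategy experiments explore.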

Citation (APA)

Egaña, A., Aldabe, I., & de Lacalle, O. L. (2023). Exploration of Annotation Strategies for Automatic Short Answer Grading. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13916 LNAI, pp. 377–388). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-36272-9_31
