Representing Scoring Rubrics as Graphs for Automatic Short Answer Grading

Abstract

To score open-ended responses, researchers often design a scoring rubric. Rubrics can help produce more consistent ratings and reduce bias. This project explores whether an automated short answer grading model can learn information from a scoring rubric to produce ratings closer to those of a human rater. We explore the impact of adding an additional transformer encoder layer to a BERT model and training the weights of this extra layer on only the scoring rubric text. Additionally, we experiment with Node2Vec sampling to capture the graph-like ordinal structure of the rubric text for further pre-training. Results show superior model performance when the model is further pre-trained with the scoring rubric text; questions with a very simple rubric structure show the most improvement from incorporating rubric text. Using Node2Vec to capture the structure of the text had an inconclusive impact.
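
The extra-encoder-layer idea can be sketched in code. Below is a minimal sketch, assuming PyTorch and the Hugging Face transformers library: a frozen pre-trained BERT with one additional trainable transformer encoder layer stacked on its output. The class name RubricBert, the bert-base-uncased checkpoint, and the example rubric sentence are illustrative assumptions; the paper's actual pre-training objective and hyperparameters are not shown here.

```python
# Sketch: frozen BERT + one extra trainable encoder layer for rubric text.
# Assumes `torch` and `transformers` are installed; all names below are
# illustrative, not the authors' exact configuration.
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

class RubricBert(nn.Module):
    def __init__(self, base_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(base_name)
        # Freeze the pre-trained BERT weights; only the extra layer learns
        # from the scoring-rubric text.
        for p in self.bert.parameters():
            p.requires_grad = False
        hidden = self.bert.config.hidden_size            # 768 for bert-base
        self.extra_layer = nn.TransformerEncoderLayer(
            d_model=hidden,
            nhead=self.bert.config.num_attention_heads,  # 12 for bert-base
            batch_first=True,
        )

    def forward(self, input_ids, attention_mask):
        hidden_states = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                              # (batch, seq, hidden)
        return self.extra_layer(
            hidden_states,
            src_key_padding_mask=(attention_mask == 0),  # ignore padding
        )

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = RubricBert()
# Hypothetical rubric-criterion string used only to exercise the forward pass.
rubric_sentences = ["2 points: response names the variable AND cites evidence."]
batch = tokenizer(rubric_sentences, return_tensors="pt", padding=True)
out = model(batch["input_ids"], batch["attention_mask"])
print(out.shape)  # torch.Size([1, seq_len, 768])
```

Freezing the base model means the extra layer is the only place rubric-specific information can accumulate, which mirrors the abstract's description of training "the weights of this extra layer" on the rubric text alone.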
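
The Node2Vec sampling step can likewise be sketched. The following is a minimal illustration of a second-order biased random walk in the style of node2vec (Grover & Leskovec, 2016), assuming a hypothetical rubric graph whose score levels form an ordinal chain with criterion phrases attached; the authors' actual graph construction, walk parameters, and pre-training pipeline are not specified here.

```python
# Sketch: node2vec-style walks over a toy rubric graph whose ordinal
# structure (score_0 -> score_1 -> score_2) is encoded as edges.
# Graph layout and parameters p, q are illustrative assumptions.
import random
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("score_0", "score_1"), ("score_1", "score_2"),   # ordinal chain
    ("score_0", "no evidence given"),
    ("score_1", "names the variable"),
    ("score_2", "names the variable and cites evidence"),
])

def node2vec_walk(graph, start, length, p=1.0, q=1.0):
    """Second-order biased random walk as in Grover & Leskovec (2016)."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        neighbors = list(graph.neighbors(cur))
        if not neighbors:
            break
        if len(walk) == 1:
            walk.append(random.choice(neighbors))
            continue
        prev = walk[-2]
        # Return parameter p and in-out parameter q bias the next step.
        weights = []
        for nxt in neighbors:
            if nxt == prev:
                weights.append(1.0 / p)      # revisit the previous node
            elif graph.has_edge(nxt, prev):
                weights.append(1.0)          # stay at distance 1 from prev
            else:
                weights.append(1.0 / q)      # move outward to distance 2
        walk.append(random.choices(neighbors, weights=weights)[0])
    return walk

# Sampled walks become node sequences that could feed further pre-training.
walks = [node2vec_walk(G, n, length=5, p=1.0, q=0.5)
         for n in G.nodes for _ in range(3)]
print(walks[0])
```

A low q pushes the walk outward along the ordinal chain, so sampled sequences tend to reflect the score ordering; this is one plausible way a walk-based sampler could expose the rubric's graph structure to the model.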

Citation (APA)

Condor, A., Pardos, Z., & Linn, M. (2022). Representing Scoring Rubrics as Graphs for Automatic Short Answer Grading. In Lecture Notes in Computer Science (Vol. 13355 LNCS, pp. 354–365). Springer. https://doi.org/10.1007/978-3-031-11644-5_29
