The dominant paradigm for content scoring is to learn an instance-based model, i.e. to use lexical features derived from the learner answers themselves. An alternative approach that receives much less attention, however, is to learn a similarity-based model. We introduce an architecture that efficiently learns a similarity model and find that results on the standard ASAP dataset are on par with a BERT-based classification approach.
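The core idea of similarity-based scoring can be sketched as follows: embed the new learner answer and a set of already-scored reference answers in a shared vector space (e.g. via S-BERT), then assign the score of the most similar reference. The sketch below is not the authors' implementation; the toy embeddings, the `similarity_score` helper, and the nearest-neighbour scoring rule are illustrative assumptions standing in for real sentence embeddings and the paper's learned similarity model.

```python
import numpy as np

def cosine_similarity(a, b):
    # standard cosine similarity between two dense vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_score(answer_emb, reference_embs, reference_scores):
    # hypothetical scoring rule: adopt the score of the nearest
    # already-scored reference answer in embedding space
    sims = [cosine_similarity(answer_emb, r) for r in reference_embs]
    return reference_scores[int(np.argmax(sims))]

# toy 2-d vectors standing in for S-BERT sentence embeddings
reference_embs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
reference_scores = [2, 0]  # scores previously assigned by a human rater
new_answer = np.array([0.9, 0.1])

print(similarity_score(new_answer, reference_embs, reference_scores))  # -> 2
```

In contrast, an instance-based classifier (like the BERT baseline mentioned above) would map the answer text directly to a score label without comparing it to other answers.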
CITATION STYLE
Bexte, M., Horbach, A., & Zesch, T. (2022). Similarity-Based Content Scoring - How to Make S-BERT Keep Up With BERT. In BEA 2022 - 17th Workshop on Innovative Use of NLP for Building Educational Applications, Proceedings (pp. 118–123). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.bea-1.16