Similarity-Based Content Scoring - How to Make S-BERT Keep Up With BERT

17 citations · 31 Mendeley readers

Abstract

The dominant paradigm for content scoring is to learn an instance-based model, i.e. to use lexical features derived from the learner answers themselves. An alternative approach that receives much less attention, however, is to learn a similarity-based model. We introduce an architecture that efficiently learns a similarity model and find that results on the standard ASAP dataset are on par with a BERT-based classification approach.
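To illustrate the similarity-based alternative the abstract contrasts with instance-based classification, here is a minimal sketch: a learner answer is scored by comparing its embedding against embeddings of already-scored reference answers and adopting the label of the most similar one. This is a generic nearest-neighbor illustration, not the authors' architecture; the toy vectors below stand in for S-BERT sentence embeddings.

```python
# Hedged sketch of similarity-based scoring: 1-NN over answer embeddings.
# In practice the vectors would come from an S-BERT encoder; here they are toy data.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_score(answer_vec, reference_vecs, reference_labels):
    """Assign the label of the most similar scored reference answer."""
    sims = [cosine(answer_vec, r) for r in reference_vecs]
    return reference_labels[int(np.argmax(sims))]

# Toy reference answers: one resembling a full-credit answer, one a no-credit answer.
refs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
labels = [2, 0]  # hypothetical score points

print(similarity_score(np.array([0.9, 0.1]), refs, labels))  # → 2
```

An instance-based model would instead train a classifier directly on features of the answers; the similarity-based setup only needs an encoder plus scored reference answers, which is what makes comparing S-BERT embeddings against a fine-tuned BERT classifier interesting.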

Citation (APA)

Bexte, M., Horbach, A., & Zesch, T. (2022). Similarity-Based Content Scoring - How to Make S-BERT Keep Up With BERT. In BEA 2022 - 17th Workshop on Innovative Use of NLP for Building Educational Applications, Proceedings (pp. 118–123). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.bea-1.16
