Implicit Phenomena in Short-Answer Scoring Data

Abstract

Short-answer scoring is the task of assessing the correctness of a short text given as a response to a question, which can come from a variety of educational scenarios. As only content, not form, is important, the exact wording, including the explicitness of an answer, should not matter. However, many state-of-the-art scoring models rely heavily on lexical information, be it word embeddings in a neural network or n-grams in an SVM. Thus, the exact wording of an answer might very well make a difference. We therefore quantify to what extent implicit language phenomena occur in short-answer datasets and examine the influence they have on automatic scoring performance. We find that the level of implicitness depends on the individual question, and that some phenomena are very frequent. Resolving implicit wording into explicit formulations indeed tends to improve automatic scoring performance.
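
To make the abstract's point about lexical sensitivity concrete, below is a minimal sketch of the kind of n-gram-plus-SVM scorer the abstract refers to, built with scikit-learn. It is not the authors' system; the question, student answers, and labels are invented for illustration.

```python
# Minimal sketch of a lexical short-answer scorer: word n-gram features
# fed into a linear SVM. Illustrates the model family named in the
# abstract; not the authors' implementation, and the data is toy.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented answers to "What does a plant produce from sunlight?",
# labeled correct (1) or incorrect (0).
answers = [
    "the plant uses sunlight to make glucose",
    "it produces glucose from sunlight",
    "the plant drinks water through its leaves",
    "plants eat soil to grow",
]
labels = [1, 1, 0, 0]

# The scorer sees only surface wording (word uni- and bigrams), so a
# semantically equivalent but differently worded answer may receive a
# different score.
scorer = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
scorer.fit(answers, labels)

print(scorer.predict(["glucose is made using sunlight"]))  # explicit wording
print(scorer.predict(["it does make some"]))               # implicit wording
```

Because such a model scores answers purely by their surface n-grams, an implicitly worded paraphrase of a correct answer can fall outside the learned vocabulary and be scored differently from an explicit formulation, which is exactly the sensitivity the paper quantifies.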

Citation (APA)

Bexte, M., Horbach, A., & Zesch, T. (2021). Implicit phenomena in short-answer scoring data. In Proceedings of the 1st Workshop on Understanding Implicit and Underspecified Language (UNIMPLICIT 2021) (pp. 11–19). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.unimplicit-1.2
