This paper presents an annotation project that explores the relationship between textual entailment and short answer scoring (SAS). We annotate entailment relations between learner and target answers in the Corpus of Reading Comprehension Exercises for German (CREG) using a fine-grained label inventory and compare them in various ways to correctness scores assigned by teachers. Our main finding is that, although the two tasks are clearly related, not all of our entailment tags can be mapped directly to SAS scores; in particular, the area of partial entailment covers instances that are problematic for automatic scoring and need further investigation.
CITATION STYLE
Ostermann, S., Horbach, A., & Pinkal, M. (2015). Annotating Entailment Relations for Short-answer Questions. In Proceedings of the 2nd Workshop on Natural Language Processing Techniques for Educational Applications (NLP-TEA 2015), held in conjunction with ACL-IJCNLP 2015 (pp. 49–58). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w15-4408