Automating model building in c-rater


Abstract

c-rater is Educational Testing Service's technology for the content scoring of short student responses. A major step in the scoring process is Model Building, in which variants of model answers are generated that correspond to the rubric for each item, or test question. Until recently, Model Building was knowledge-engineered (KE) and hence labor- and time-intensive. In this paper, we describe our approach to automating Model Building in c-rater. We show that c-rater achieves comparable scoring accuracy on automatically built and KE models.

Citation (APA)
Sukkarieh, J. Z., & Stoyanchev, S. (2009). Automating model building in c-rater. In TextInfer 2009 - 2009 Workshop on Applied Textual Inference at the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, ACL-IJCNLP 2009 - Proceedings (pp. 61–69). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1708141.1708153
