C-rater: Automated scoring of short-answer questions


Abstract

C-rater is an automated scoring engine developed to score responses to content-based short-answer questions. Rather than relying on simple string matching, it uses predicate-argument structure, pronominal reference, morphological analysis, and synonyms to assign full or partial credit to a short-answer response. C-rater has been used in two studies: the National Assessment of Educational Progress (NAEP) and a statewide assessment in Indiana. In both studies, c-rater agreed with human graders about 84% of the time. © 2003 Kluwer Academic Publishers.
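To make the idea concrete, the sketch below is a minimal, hypothetical illustration of concept-based partial-credit scoring in the spirit the abstract describes: the response is normalized (a stand-in for morphological analysis) and matched against rubric concepts with a small synonym table, rather than by exact string comparison. It is not the c-rater engine; the lemma map, synonym sets, and scoring function are invented for illustration, and the predicate-argument and pronominal-reference analysis c-rater performs is not modeled here.

```python
# Toy sketch of concept-based partial-credit scoring (NOT the c-rater engine).
# All data structures below are hypothetical examples.

# Hypothetical lemma map standing in for a morphological analyzer.
LEMMAS = {"plants": "plant", "growing": "grow", "grew": "grow", "needs": "need"}

# Hypothetical synonym sets standing in for a lexical resource.
SYNONYMS = {
    "sunlight": {"sunlight", "light", "sunshine"},
    "water": {"water", "moisture"},
}

def normalize(word: str) -> str:
    """Lowercase, strip punctuation, and reduce a word to a base form."""
    w = word.lower().strip(".,;!?")
    return LEMMAS.get(w, w)

def concept_present(concept: str, tokens: set) -> bool:
    """A rubric concept counts as present if any of its synonyms appear."""
    return bool(SYNONYMS.get(concept, {concept}) & tokens)

def score_response(response: str, required_concepts: list) -> float:
    """Assign partial credit: fraction of rubric concepts found in the response."""
    tokens = {normalize(w) for w in response.split()}
    hits = sum(concept_present(c, tokens) for c in required_concepts)
    return hits / len(required_concepts)

if __name__ == "__main__":
    rubric = ["sunlight", "water"]  # concepts the (hypothetical) rubric requires
    print(score_response("Plants need light and moisture to grow.", rubric))  # 1.0
    print(score_response("Plants need light.", rubric))                       # 0.5
```

Even this toy version shows why synonym and morphological handling matter: "light" and "moisture" earn credit although neither matches the rubric wording "sunlight" or "water" character for character.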

Citation (APA)

Leacock, C., & Chodorow, M. (2003). C-rater: Automated scoring of short-answer questions. Computers and the Humanities, 37(4), 389–405. https://doi.org/10.1023/A:1025779619903
