Automated essay scoring using Bayes' theorem

ISSN: 1540-2525

Abstract

Two Bayesian models for text classification from the information science field were extended and applied to student-produced essays. Both models were calibrated using 462 essays with two score points. The calibrated systems were applied to 80 new, pre-scored essays, with 40 essays in each score group. Manipulated variables included the two models; the use of words, phrases, and arguments; two approaches to trimming; stemming; and the use of stopwords. While the text classification literature suggests the need to calibrate on thousands of cases per score group, accuracy of over 80% was achieved with the sparse dataset used in this study.
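As a rough illustration of the kind of Bayesian text classification the abstract describes, the sketch below trains a simple multinomial naive Bayes scorer over word features with Laplace smoothing and assigns a new essay to the score group with the highest posterior. This is an assumption-laden sketch, not the authors' models: the function names (tokenize, train, score_essay), the tiny example data, and the choice of a multinomial word model are all illustrative, and the paper's phrase and argument features, trimming, stemming, and stopword options are omitted.

# Minimal multinomial naive Bayes sketch for two-point essay scoring.
# Illustrative only: not the Rudner & Liang implementation; the paper's
# phrase/argument features, trimming, stemming, and stopwords are omitted.
import math
from collections import Counter, defaultdict

def tokenize(text):
    # Lowercase whitespace tokenization; stemming or stopword removal
    # would plug in here.
    return text.lower().split()

def train(essays, labels):
    # Estimate log priors and Laplace-smoothed word likelihoods
    # for each score group (e.g., "low" and "high").
    class_counts = Counter(labels)
    word_counts = defaultdict(Counter)
    for text, label in zip(essays, labels):
        word_counts[label].update(tokenize(text))
    vocab = {w for counts in word_counts.values() for w in counts}
    model = {"log_prior": {}, "log_like": {}, "vocab": vocab}
    total = sum(class_counts.values())
    for label in class_counts:
        model["log_prior"][label] = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        model["log_like"][label] = {
            w: math.log((word_counts[label][w] + 1) / denom) for w in vocab
        }
    return model

def score_essay(model, text):
    # Return the score group with the highest posterior log-probability.
    best_label, best_logp = None, float("-inf")
    for label, log_prior in model["log_prior"].items():
        logp = log_prior + sum(
            model["log_like"][label][w]
            for w in tokenize(text) if w in model["vocab"]
        )
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

# Hypothetical usage with a tiny calibration set:
model = train(
    ["clear thesis and strong evidence", "short vague response"],
    ["high", "low"],
)
print(score_essay(model, "a clear thesis supported by evidence"))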

Citation (APA)

Rudner, L. M., & Liang, T. (2002). Automated essay scoring using Bayes’ theorem. Journal of Technology, Learning, and Assessment, 1(2), 1–22.
