Improving peer assessment modeling of teacher’s grades, the case of OpenAnswer

Abstract

Open-ended questions are rarely used as e-learning assessment tools because of the high grading workload they impose on the teacher/tutor; this workload can be mitigated through peer assessment. In OpenAnswer we model peer assessment as a Bayesian network connecting the sub-network representing each participating student to those corresponding to the answers of the peers she graded. The model has shown good ability to predict the exact teacher's grade (the ground truth) from the peer grades alone, without further input from the teacher, and very good ability to predict it within one mark of the correct one. In this paper we explore changes to the OpenAnswer model aimed at improving its predictions. Experimental results, obtained by simulating the teacher's grading on real datasets, show the improved predictions.
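The core idea — inferring an unobserved teacher grade from several noisy peer grades via probabilistic inference — can be illustrated with a much simpler model than the paper's. The sketch below is a hypothetical toy, not the actual OpenAnswer network: it assumes a 0–10 grading scale, a uniform prior over the true grade, and an invented noise model in which a peer's grade is correct with high probability and decays with distance from the truth; the posterior is then computed by direct enumeration.

```python
# Toy illustration (assumed model, not OpenAnswer's): peer grades are noisy
# observations of the unknown teacher grade; we compute the posterior over
# the true grade by enumeration on this one-node-per-grade discrete model.

GRADES = range(11)  # assumed 0..10 marking scale

def likelihood(peer_grade, true_grade, noise=0.15):
    """P(peer grade | true grade): mass concentrated on the true grade,
    halving with each extra mark of distance (an invented noise model)."""
    d = abs(peer_grade - true_grade)
    return (1 - noise) if d == 0 else noise / (2 ** d)

def posterior(peer_grades):
    """Posterior P(true grade | observed peer grades), uniform prior."""
    scores = {g: 1.0 for g in GRADES}
    for pg in peer_grades:
        for g in GRADES:
            scores[g] *= likelihood(pg, g)
    z = sum(scores.values())
    return {g: s / z for g, s in scores.items()}

# MAP estimate: the grade the model would propose to the teacher.
post = posterior([7, 8, 7])
print(max(post, key=post.get))
```

OpenAnswer's actual network is richer — each student's sub-network models her competence and grading ability, and sub-networks are linked through the peer-grading relation — but the same principle applies: evidence from peers propagates to a posterior over the teacher's grade, from which an exact or within-one-mark prediction is read off.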

Citation (APA)

De Marsico, M., Sterbini, A., & Temperini, M. (2017). Improving peer assessment modeling of teacher’s grades, the case of OpenAnswer. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10108 LNCS, pp. 601–613). Springer Verlag. https://doi.org/10.1007/978-3-319-52836-6_64
