How to obtain efficient high reliabilities in assessing texts: Rubrics vs comparative judgement

Abstract

Assessing texts is difficult and time consuming. Even after considerable effort, independent raters are unlikely to agree on their ratings, which undermines the reliability of the assessment. Several assessment methods and their merits are described in the literature, among them the use of rubrics and the use of comparative judgement (CJ). In this study we investigate which of the two methods is more efficient in obtaining reliable outcomes when used for assessing texts. The same 12 texts are assessed in both a rubric and a CJ condition by the same 6 raters. Results show an inter-rater reliability of .30 for the rubric condition and of .84 in the CJ condition after the same amount of time invested in the respective methods. We therefore conclude that CJ is far more efficient in obtaining high reliabilities when used to assess texts. Suggestions for further research are also made.
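
Illustration (not from the paper): comparative judgement data consist of pairwise "which text is better?" decisions, which are commonly scaled with a Bradley-Terry model before a reliability coefficient is computed. The Python sketch below fits such a model to simulated comparisons of 12 texts using the standard Zermelo/MM updates; the data, iteration count, and variable names are hypothetical and only show the general technique, not the authors' analysis.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CJ data: 12 texts, every pair compared once.
# wins[i, j] counts how often text i was judged better than text j.
n_texts = 12
true_quality = rng.normal(size=n_texts)        # simulated latent quality
wins = np.zeros((n_texts, n_texts))
for i in range(n_texts):
    for j in range(i + 1, n_texts):
        # Bradley-Terry choice probability from the latent scale
        p_win = 1 / (1 + np.exp(true_quality[j] - true_quality[i]))
        if rng.random() < p_win:
            wins[i, j] += 1
        else:
            wins[j, i] += 1

wins += 0.05 * (1 - np.eye(n_texts))           # tiny pseudo-counts keep estimates finite

# Zermelo / minorisation-maximisation updates for Bradley-Terry strengths
n_pairs = wins + wins.T                        # comparisons per pair of texts
strength = np.ones(n_texts)
for _ in range(200):
    denom = np.array([
        sum(n_pairs[i, j] / (strength[i] + strength[j])
            for j in range(n_texts) if j != i)
        for i in range(n_texts)
    ])
    strength = wins.sum(axis=1) / denom
    strength /= strength.sum()                 # fix the arbitrary scale

quality_scale = np.log(strength)               # logit-scale estimates used to rank the texts
print(np.round(quality_scale - quality_scale.mean(), 2))

In practice the resulting scale values (and their standard errors) feed the reliability estimate reported for the CJ condition, whereas the rubric condition's reliability is computed directly from the raters' scores.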

Citation (APA)

Goossens, M., & De Maeyer, S. (2018). How to obtain efficient high reliabilities in assessing texts: Rubrics vs comparative judgement. In Communications in Computer and Information Science (Vol. 829, pp. 13–25). Springer Verlag. https://doi.org/10.1007/978-3-319-97807-9_2
