Interrater Reliability: Comparison of essay's tests and scoring rubrics

Abstract

This comparative study examined the difference in interrater reliability between two measurement methods: essay tests and scoring rubrics. Thirty students and thirty science teachers participated in the study. Interrater reliability was estimated using Fleiss' kappa, and the hypotheses were tested using the Mann-Whitney U test with exact p-values to increase validity. The results showed that the interrater reliability of restricted-response items was higher than that of context-dependent tasks, and the same held for extended-response items scored with the analytic rubric and the holistic rubric. Likewise, the interrater reliability of extended-response items was higher when compared to context-dependent tasks scored using the analytic rubric and the holistic rubric.
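As an illustration only, not the authors' analysis code, the following Python sketch shows how Fleiss' kappa and an exact Mann-Whitney U test can be computed with statsmodels and SciPy; all rating data and group values below are made up for the example.

import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical ratings: 10 essays, each scored 0-3 by 5 raters.
rng = np.random.default_rng(0)
ratings = rng.integers(0, 4, size=(10, 5))

# aggregate_raters converts the (subjects x raters) matrix into the
# (subjects x categories) count table that fleiss_kappa expects.
table, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa: {fleiss_kappa(table):.3f}")

# Exact Mann-Whitney U test comparing two hypothetical sets of per-item
# agreement scores (e.g., restricted-response vs. context-dependent items).
group_a = [0.62, 0.55, 0.71, 0.48, 0.66]
group_b = [0.41, 0.39, 0.52, 0.35, 0.44]
u_stat, p_value = mannwhitneyu(group_a, group_b, method="exact")
print(f"U = {u_stat}, exact p = {p_value:.4f}")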

Citation (APA)

Wahyuni, L. D., Gumela, G., & Maulana, H. (2021). Interrater reliability: Comparison of essay's tests and scoring rubrics. Journal of Physics: Conference Series, 1933, 012081. https://doi.org/10.1088/1742-6596/1933/1/012081
