Developing, evaluating and validating a scoring rubric for written case reports

17 citations · 53 Mendeley readers

Abstract

OBJECTIVE: The purpose of this study was to evaluate Family Medicine Clerkship students' writing skills using an anchored scoring rubric. We report on the assessment of the scoring rubric (SR) currently used to grade medical students' written case description papers (CDP), describe the development of a revised SR with an examination of scoring consistency among faculty raters, and report student feedback on the SR revisions and the written CDP.

METHODS: Five faculty members scored a total of eighty-three written CDP using both the Original SR (OSR) and the Revised SR1 (RSR1) during the 2009-2010 academic year.

RESULTS: Faculty inter-rater reliability increased overall with the RSR1. In a subset analysis, the five faculty members using the Revised SR2 (RSR2) showed high inter-rater reliability on that subset of papers, as measured by the intra-class correlation (ICC = 0.93, p = 0.001).

CONCLUSIONS: These findings have implications for medical education by highlighting the importance of developing and assessing reliable evaluation tools for medical student writing projects.
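The abstract does not specify which ICC model the authors used (single vs. average measures, consistency vs. agreement). As a rough illustration only, the sketch below computes a two-way random-effects ICC from a papers-by-raters score matrix using the Shrout-Fleiss ICC(2,1) and ICC(2,k) formulas; the function name and the simulated data are assumptions for demonstration, not the authors' code or data.

```python
import numpy as np

def icc_two_way_random(scores):
    """Two-way random-effects intraclass correlation.

    scores: (n_papers, k_raters) array where every rater scores
    every paper. Returns (ICC(2,1), ICC(2,k)): single-rater and
    average-of-k-raters reliability (Shrout & Fleiss, 1979).
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-paper means
    col_means = scores.mean(axis=0)   # per-rater means

    # Sums of squares for a two-way ANOVA without replication
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((scores - grand) ** 2)
    ss_err = ss_total - ss_rows - ss_cols

    # Mean squares: papers, raters, residual
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    icc_single = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
    icc_average = (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)
    return icc_single, icc_average

if __name__ == "__main__":
    # Hypothetical data: 20 papers scored by 5 raters on a 0-100 scale
    rng = np.random.default_rng(0)
    true_quality = rng.normal(75, 10, size=(20, 1))
    ratings = true_quality + rng.normal(0, 3, size=(20, 5))
    print(icc_two_way_random(ratings))
```

With raters whose scores mostly track the underlying paper quality, as in the simulated data above, both ICC values approach 1; a value like the reported 0.93 indicates that rubric scores depend far more on the paper than on which faculty member graded it.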

Citation (APA)

Cyr, P. R., Smith, K. A., Broyles, I. L., & Holt, C. T. (2014). Developing, evaluating and validating a scoring rubric for written case reports. International Journal of Medical Education, 5, 18–23. https://doi.org/10.5116/ijme.52c6.d7ef
