Assessing the assessment in emergency care training


Abstract

construct validity, consisting of a clinical and a communication subscale. The internal consistency of the (sub)scales was high (α = .93/.91/.86). The inter-rater reliability was moderate for the clinical competency subscale (.49) and the global performance scale (.50), but poor for the communication subscale (.27). A generalizability study showed that a reliable assessment requires 5–13 raters when using checklists, but only four when using the clinical competency scale or the global performance scale. Conclusions: This study shows poor validity and reliability for assessing emergency skills with checklists, but good validity and moderate reliability with clinical competency or global performance scales. Involving more raters can improve the reliability substantially. Recommendations are made to improve this high-stakes skill assessment.
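The claim that averaging over more raters improves reliability can be illustrated with the Spearman-Brown prophecy formula, a standard psychometric result (the paper itself reports a generalizability study, which is a related but more general approach; this sketch only shows the direction of the effect, using the single-rater reliabilities quoted in the abstract):

```python
def spearman_brown(r_single, k):
    """Projected reliability of a score averaged over k raters,
    given single-rater reliability r_single (Spearman-Brown prophecy)."""
    return k * r_single / (1 + (k - 1) * r_single)

# With the abstract's single-rater reliability of .49 for the clinical
# competency subscale, four raters push the projected reliability to ~.79,
# close to the conventional .80 threshold -- consistent with the reported
# finding that four raters suffice for that scale.
print(round(spearman_brown(0.49, 4), 2))
```

For the poorer communication subscale (.27), the same formula shows why many more raters would be needed to reach a comparable level.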

Citation (APA)
Dankbaar, M. E. W., Stegers-Jager, K. M., Baarveld, F., Van Merrienboer, J. J. G., Norman, G. R., Rutten, F. L., … Schuit, S. C. E. (2014). Assessing the assessment in emergency care training. PLoS ONE, 9(12). https://doi.org/10.1371/journal.pone.0114663
