Introduction: Competence as a leader of a pediatric resuscitation is challenging to assess during residency training, owing both to the rarity of clinical opportunities and to the complexity of the construct of resuscitation team leader competence. Now that patient simulation can provide standardized, high-fidelity clinical experiences, the greater challenge is to provide educators with a valid and reliable evaluation tool. This study set out to develop such a comprehensive tool.

Methods: Phase I: Using Delphi methodology, eight pediatric acute care experts participated in the development of a comprehensive evaluation tool. An extensive list of potential contributing items was developed from the literature and from an initial brainstorming meeting. Through consensus across Delphi rounds, items were refined to produce a final evaluation tool containing only items rated high in importance and observability. Phase II: Thirty pediatric residents were video recorded leading two standardized resuscitation scenarios on a high-fidelity human patient simulator. The tool was piloted using randomly selected videos. Following piloting, three expert raters blinded to resident level of training independently rated resident leadership performance using the newly developed tool; a fourth expert rater used a global rating score. Evidence for the tool's face and content validity was built through use of the Delphi methodology in the tool's creation. Criterion validity, construct validity, reliability, and generalizability were also measured.

Results: An initial list of 58 potential items was reduced through two rounds of Delphi to 26 items divided into two subscales: leadership and communication skills (Part 1), and knowledge and clinical skills (Part 2). Cronbach's alpha was 0.818 for total scores, 0.827 for Part 1, and 0.673 for Part 2. Generalizability coefficients were 0.760, 0.844, and 0.482 for total, Part 1, and Part 2 scores, respectively.
A minimum of two raters and two scenarios was found to be necessary for stability of scores for the entire tool. Interrater reliability showed medium-to-large correlations for total scores, large correlations for Part 1 scores, and small-to-medium correlations for Part 2 scores. No significant difference was found between scores across scenarios. The correlation coefficient between global rating scores and evaluation tool scores was 0.702. Exploratory factor analysis supported the two subscales established in Phase I.

Conclusion: A 26-item evaluation tool with two subscales was created and tested. Evidence was built for the tool's reliability and validity, with Part 1 performing better than Part 2. These results support the ability to validly evaluate pediatric resuscitation team leader competence using simulation.
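For readers computing the same internal-consistency statistic on their own rating data, Cronbach's alpha for a k-item scale is k/(k-1) x (1 - sum of item variances / variance of total scores). A minimal Python sketch (the function name and the scores below are illustrative, not data from this study):

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a subjects-by-items matrix of scores.

    scores: one row per subject, one column per scale item.
    Uses sample variances throughout.
    """
    k = len(scores[0])                          # number of items
    items = list(zip(*scores))                  # transpose to per-item columns
    sum_item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Hypothetical two-item ratings for three residents (not study data):
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # perfectly consistent items -> 1.0
```

In practice an alpha near 0.8, as reported for the total and Part 1 scores here, is conventionally taken to indicate good internal consistency for a rating scale.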
Citation:
Grant, V., Grant, E., Bhanji, F., Cheng, A., Duff, J., & Lockyer, J. (2010). Use of Simulation in the Development of a Valid and Reliable Pediatric Resuscitation Team Leadership Evaluation Tool. Paediatrics & Child Health, 15(suppl_A), 58A-58A. https://doi.org/10.1093/pch/15.suppl_a.58aa