Said another way: asking the right questions regarding the effectiveness of simulations.

Abstract

Applying simulations in healthcare practice and education is increasingly accepted, yet a number of recent authors have questioned the effectiveness of these technologies. The contention is that while high-fidelity simulators may contribute to educational gains, their gains compared to low-tech alternatives are often "not significant." That assessment, however, and the evidence it is based on, may be a consequence of asking the wrong questions. Typical studies compare a measure of "average success" for one group's members versus another's on some criterion, but this can mask important information about the "tails" of the distribution of trainee performance. An alternative approach, adapted from quality control, compares the error rates of each group in the experiment, in aggregate. The statistical results of evaluations can change when this method is used, as illustrated by a recent study showing that simulation training can significantly reduce the frequency of medication administration errors among student nurses on placement. The paper includes a case study to demonstrate tangibly how the way we frame our evaluation test question can reverse the apparent statistical finding of the significance test. © 2010 Wiley Periodicals, Inc.
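To make the abstract's contrast concrete, here is a minimal sketch in Python (using NumPy and SciPy) of how the two framings of the evaluation question can disagree. The numbers are hypothetical illustrations, not data from the paper: one framing runs a t-test on group mean scores, the other applies the quality-control framing by comparing aggregate error rates with a Fisher exact test.

```python
import numpy as np
from scipy import stats

# --- Framing 1: "Is the average score higher?" ---
# Hypothetical end-of-course competency scores for two groups of 100
# student nurses (illustrative numbers only, not the study's data).
rng = np.random.default_rng(42)
sim_scores = rng.normal(loc=76.5, scale=8.0, size=100)  # simulation-trained
alt_scores = rng.normal(loc=75.0, scale=8.0, size=100)  # low-tech alternative

t_stat, p_means = stats.ttest_ind(sim_scores, alt_scores)
print(f"t-test on mean scores: t = {t_stat:.2f}, p = {p_means:.3f}")
# A 1.5-point true difference against a standard deviation of 8 is badly
# underpowered at this sample size, so this framing will typically
# report "not significant".

# --- Framing 2: "How often do trainees err?" (quality-control view) ---
# Hypothetical counts of students who made at least one medication
# administration error while on placement.
#           errors  no errors
table = [[   3,       97],    # simulation-trained group
         [  14,       86]]    # comparison group
odds_ratio, p_errors = stats.fisher_exact(table)
print(f"Fisher exact test on error rates: p = {p_errors:.4f}")
# A 3% versus 14% error rate lives in the tail of the performance
# distribution, which the comparison of means above cannot see.
```

With effect sizes like these, the comparison of means will usually come back non-significant while the comparison of error rates is strongly significant, mirroring the reversal the authors describe.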

Citation (APA)

Goodman, W. M., & Lamers, A. (2010). Said another way: asking the right questions regarding the effectiveness of simulations. Nursing Forum, 45(4), 246–252. https://doi.org/10.1111/j.1744-6198.2010.00199.x
