On the assessment of Monte Carlo error in simulation-based statistical analyses

  • Elizabeth Koehler
  • Elizabeth Brown
  • Sebastien J P A Haneuse

Statistical experiments, more commonly referred to as Monte Carlo or simulation studies, are used to study the behavior of statistical methods and measures under controlled situations. Whereas recent computing and methodological advances have permitted increased efficiency in the simulation process, known as variance reduction, such experiments remain limited by their finite nature and hence are subject to uncertainty; when a simulation is run more than once, different results are obtained. However, virtually no emphasis has been placed on reporting the uncertainty, referred to here as Monte Carlo error, associated with simulation results in the published literature, or on justifying the number of replications used. Both deserve broader consideration. Here we present a series of simple and practical methods for estimating Monte Carlo error as well as determining the number of replications required to achieve a desired level of accuracy. The issues and methods are demonstrated with two simple examples, one evaluating the operating characteristics of the maximum likelihood estimator for the parameters in logistic regression and the other in the context of using the bootstrap to obtain 95% confidence intervals. The results suggest that in many settings, Monte Carlo error may be more substantial than traditionally thought.
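
For a quantity estimated by averaging over independent replications, Monte Carlo error can be quantified by the standard deviation of the replicate values divided by the square root of the number of replications, and inverting that formula gives the number of replications needed to reach a target accuracy. The sketch below (in Python; the paper itself is not tied to any particular language) illustrates this idea for the bias of a logistic-regression slope estimate. It is not the authors' code, and the true parameter values, per-replicate sample size, and accuracy target are arbitrary choices for illustration.

    # Minimal sketch: Monte Carlo standard error (MCSE) of an estimated bias,
    # assuming R independent replications. MCSE = sd(replicates) / sqrt(R).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(20090401)
    beta_true = np.array([0.5, 1.0])   # assumed true intercept and slope (illustrative)
    n, R = 100, 1000                   # sample size per replicate, number of replications

    def fit_logistic_slope(x, y):
        """Maximum likelihood slope estimate for a simple logistic regression."""
        X = np.column_stack([np.ones_like(x), x])
        negloglik = lambda b: np.sum(np.logaddexp(0.0, X @ b) - y * (X @ b))
        return minimize(negloglik, np.zeros(2), method="BFGS").x[1]

    # Run R replications, storing the estimated slope from each one.
    slopes = np.empty(R)
    for r in range(R):
        x = rng.normal(size=n)
        p = 1.0 / (1.0 + np.exp(-(beta_true[0] + beta_true[1] * x)))
        y = rng.binomial(1, p).astype(float)
        slopes[r] = fit_logistic_slope(x, y)

    bias = slopes.mean() - beta_true[1]
    mcse = slopes.std(ddof=1) / np.sqrt(R)   # Monte Carlo SE of the estimated bias
    print(f"estimated bias = {bias:.4f}, Monte Carlo SE = {mcse:.4f}")

    # Replications needed so that the Monte Carlo SE falls below a target value,
    # using R_needed = (sd / target)^2 with the observed replicate SD as a pilot estimate.
    target = 0.005
    R_needed = int(np.ceil((slopes.std(ddof=1) / target) ** 2))
    print(f"replications needed for MCSE <= {target}: {R_needed}")

The same kind of calculation applies to proportions such as confidence-interval coverage, where the replicate values are 0/1 indicators and the Monte Carlo standard error is sqrt(p(1-p)/R).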

Author-supplied keywords

  • Bootstrap
  • Jackknife
  • Replication
