We describe an improved batch-means procedure for building a confidence interval on a steady-state expected simulation response that is centered on the sample mean of a portion of the corresponding simulation-generated time series and satisfies a user-specified absolute or relative precision requirement. The theory supporting the new algorithm requires only that the output process be weakly dependent (phi-mixing), so that for a sufficiently large batch size the batch means are approximately multivariate normal but not necessarily uncorrelated. A variant of the method of nonoverlapping batch means (NOBM), the Automated Simulation Analysis Procedure (ASAP) operates as follows: the batch size is progressively increased until either (a) the batch means pass the von Neumann test for independence, in which case ASAP delivers a classical NOBM confidence interval; or (b) the batch means pass the Shapiro-Wilk test for multivariate normality, in which case ASAP delivers a corrected confidence interval. The correction is based on an inverted Cornish-Fisher expansion for the classical NOBM t-ratio, with the terms of the expansion estimated from an autoregressive-moving average time series model of the batch means. An experimental performance evaluation demonstrates the advantages of ASAP over other widely used batch-means procedures.
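To make the batching logic concrete, the following is a minimal sketch, not the authors' ASAP implementation, of the independence branch only: batch means are recomputed with progressively larger batches until a von Neumann test no longer rejects independence, and a classical NOBM t interval is then formed. The specific test form, the doubling rule, the initial batch count, and the AR(1) example are illustrative assumptions; the multivariate-normality branch and the Cornish-Fisher/ARMA correction described above are omitted.

```python
"""Sketch of the NOBM branch of an ASAP-style procedure (assumed details,
not the published algorithm): grow the batch size until the batch means
pass a von Neumann independence test, then build a classical t interval."""
import numpy as np
from scipy import stats


def batch_means(x, k):
    """Split x into k nonoverlapping batches (dropping any remainder)
    and return the k batch means."""
    m = len(x) // k                          # batch size
    return x[: m * k].reshape(k, m).mean(axis=1)


def von_neumann_independent(y, alpha=0.05):
    """One-sided von Neumann randomness test on the batch means y
    (assumed test form); True if independence is not rejected."""
    k = len(y)
    num = np.sum(np.diff(y) ** 2)
    den = np.sum((y - y.mean()) ** 2)
    c = 1.0 - num / (2.0 * den)              # lag-1 correlation estimator
    crit = stats.norm.ppf(1 - alpha) * np.sqrt((k - 2) / (k**2 - 1.0))
    return c <= crit                          # no significant positive correlation


def nobm_interval(x, k=32, alpha=0.05, max_rebatches=10):
    """Halve the number of batches (i.e., roughly double the batch size)
    until the batch means look independent, then return the classical
    NOBM point estimate and confidence-interval half-width."""
    for _ in range(max_rebatches):
        y = batch_means(x, k)
        if von_neumann_independent(y):
            half = stats.t.ppf(1 - alpha / 2, k - 1) * y.std(ddof=1) / np.sqrt(k)
            return y.mean(), half
        if k <= 4:
            break
        k //= 2
    raise RuntimeError("batch means never passed the independence test")


# Illustrative run on an AR(1) process with steady-state mean 0.
rng = np.random.default_rng(1)
x = np.empty(100_000)
x[0] = 0.0
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
center, half = nobm_interval(x)
print(f"95% CI: {center:.4f} +/- {half:.4f}")
```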
Steiger, N. M., & Wilson, J. R. (1999). Improved batching for confidence interval construction in steady-state simulation. In Winter Simulation Conference Proceedings (Vol. 1, pp. 442–451). IEEE. https://doi.org/10.1145/324138.324278