Quasi-Monte Carlo

  • Glasserman, P.

Abstract

In Monte Carlo (MC) sampling, the sample averages of random quantities are used to estimate the corresponding expectations; the justification is the law of large numbers. In quasi-Monte Carlo (QMC) sampling we obtain a law of large numbers with deterministic inputs instead of random ones. Naturally we seek deterministic inputs that make the answer converge as quickly as possible; in particular, it is common for QMC to produce much more accurate answers than MC does. Keller [19] was an early proponent of QMC methods for computer graphics. We begin by reviewing Monte Carlo sampling and showing how many problems can be reduced to integrals over the unit cube [0, 1)^d. Next we consider how stratification methods, such as jittered sampling, can improve the accuracy of Monte Carlo for favorable functions while doing no harm for unfavorable ones. Methods of multiple stratification, such as Latin hypercube sampling (n-rooks), represent a significant improvement on stratified sampling. These stratification methods balance the sampling points with respect to a large number of hyperrectangular boxes. QMC may be thought of as an attempt to take this to the logical limit: how close can we get to balancing the sample points with respect to every box in [0, 1)^d at once? The answer, provided by the theory of discrepancy, is that we can get surprisingly far, and the resulting methods produce a significant improvement compared to MC. The chapter concludes with a presentation of digital nets, integration lattices, and randomized QMC.
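The MC-versus-QMC contrast described above can be sketched in a few lines. The following is a minimal illustration, not code from the chapter: it estimates a simple integral over [0, 1)^2 once with pseudorandom points and once with the first points of a Halton sequence (one standard low-discrepancy construction built from van der Corput radical inverses). The integrand, sample size, and bases are arbitrary choices for demonstration.

```python
import random

def van_der_corput(n, base=2):
    """Radical inverse of the integer n in the given base: the
    one-dimensional building block of Halton sequences."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def halton(n, d, primes=(2, 3, 5, 7, 11, 13)):
    """First n points of the d-dimensional Halton sequence, using
    a distinct prime base per coordinate."""
    return [[van_der_corput(i + 1, primes[j]) for j in range(d)]
            for i in range(n)]

def f(x):
    # Smooth test integrand on [0, 1)^2 with exact integral 1:
    # the integral of 4*x*y over the unit square is 4*(1/2)*(1/2) = 1.
    return 4.0 * x[0] * x[1]

n, d = 1024, 2
random.seed(0)
# Plain MC: average of f at pseudorandom points.
mc = sum(f([random.random() for _ in range(d)]) for _ in range(n)) / n
# QMC: average of f at deterministic low-discrepancy points.
qmc = sum(f(x) for x in halton(n, d)) / n
print("MC error: ", abs(mc - 1.0))
print("QMC error:", abs(qmc - 1.0))
```

For a smooth integrand like this, the QMC error typically shrinks much faster than the O(n^{-1/2}) rate of MC, which is the accuracy advantage the abstract refers to.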

Citation (APA)

Glasserman, P. (2004). Quasi-Monte Carlo (pp. 281–337). https://doi.org/10.1007/978-0-387-21617-1_5
