Currently, the most effective constructions of low-discrepancy point sets and sequences are based on the theory of (t, m, s)-nets and (t, s)-sequences. In this work we discuss parallelization techniques for quasi-Monte Carlo integration using (t, s)-sequences. We show that leapfrog parallelization may be very dangerous, whereas block-based parallelization turns out to be robust.
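The two partitioning schemes the abstract contrasts can be sketched as follows. This is a minimal illustration, not the paper's construction: it assumes a base-2 radical-inverse (van der Corput) sequence as a stand-in for a general (t, s)-sequence, and it only shows how the index sets are split among processors (leapfrog: every P-th index; block: one contiguous range per processor). The paper's actual concern, the distribution quality of the resulting substreams, is not demonstrated here.

```python
def van_der_corput(n, base=2):
    # Radical inverse of n in the given base: a simple one-dimensional
    # low-discrepancy sequence used here as a stand-in for a (t, s)-sequence.
    x, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

def leapfrog_indices(proc, num_procs, total):
    # Leapfrog: processor `proc` takes indices proc, proc+P, proc+2P, ...
    return range(proc, total, num_procs)

def block_indices(proc, num_procs, block_len):
    # Block-based: processor `proc` takes one contiguous block of indices.
    start = proc * block_len
    return range(start, start + block_len)

def partial_average(f, indices):
    # Each processor's contribution to the QMC estimate of the integral of f.
    pts = [van_der_corput(i) for i in indices]
    return sum(f(x) for x in pts) / len(pts)

P, N = 4, 4096  # processors and total point count (N divisible by P)
f = lambda x: x * x  # integral over [0, 1) is 1/3

# With equal-sized substreams, averaging the per-processor averages
# recovers the full-sequence estimate; both schemes cover indices 0..N-1.
leap = sum(partial_average(f, leapfrog_indices(p, P, N)) for p in range(P)) / P
block = sum(partial_average(f, block_indices(p, P, N // P)) for p in range(P)) / P
print(leap, block)
```

Both estimates use the same N points in total, so they agree up to floating-point summation order; the schemes differ in which points each individual processor sees, which is where the leapfrog danger discussed in the paper arises.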
Schmid, W. C., & Uhl, A. (1999). Parallel quasi-Monte Carlo integration using (t,s)-sequences. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1557, pp. 96–106). Springer Verlag. https://doi.org/10.1007/3-540-49164-3_10