Point process-based Monte Carlo estimation

Abstract

This paper addresses the problem of estimating the expectation of a real-valued random variable of the form X = g(U), where g is a deterministic function and U can be a random finite- or infinite-dimensional vector. Using recent results on rare event simulation, we propose a unified framework for both probability and mean estimation for such random variables, linking algorithms such as the Tootsie Pop Algorithm and the Last Particle Algorithm with nested sampling. In particular, the framework extends nested sampling in the following ways. First, the random variable X no longer needs to be bounded: we give the principle of an ideal estimator with an infinite number of terms that is unbiased and always better than a classical Monte Carlo estimator; in particular, it has finite variance as soon as there exists k > 1 such that E[X^k] < ∞. Moreover, we address the issue of nested sampling termination and show that a random truncation of the sum can preserve unbiasedness while increasing the variance by a factor of at most 2 compared to the ideal case. We also build an unbiased estimator with a fixed computational budget that supports a Central Limit Theorem, and we discuss a parallel implementation of nested sampling, which can dramatically reduce its running time. Finally, we extensively study the case where X is heavy-tailed.
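To give a concrete feel for the point-process view described above, here is a toy sketch (not the paper's exact estimator) of a Last-Particle-style scheme for mean estimation. It uses the identity E[X] = ∫₀^∞ P(X > x) dx for X ≥ 0: n particles are kept, the minimum is repeatedly replaced by a fresh sample conditioned to exceed it, and the survival probability at the i-th level is approximated by (1 − 1/n)^i. The example takes X ~ Exp(1) (true mean 1) because memorylessness makes the conditional resampling step trivial; the parameter names and truncation length are illustrative choices, not from the paper.

```python
import heapq
import random


def last_particle_mean(n=500, n_iter=5000, seed=0):
    """Toy Last-Particle-style estimate of E[X] for X ~ Exp(1).

    Accumulates a Riemann sum of the survival function P(X > x):
    the i-th record minimum m_i is paired with the geometric
    probability estimate (1 - 1/n)^i, and conditional sampling
    X | X > m uses the memorylessness of the exponential law.
    """
    rng = random.Random(seed)
    # Initial cloud of n i.i.d. Exp(1) particles, kept in a min-heap.
    heap = [rng.expovariate(1.0) for _ in range(n)]
    heapq.heapify(heap)

    estimate, prev_min, weight = 0.0, 0.0, 1.0
    for _ in range(n_iter):
        m = heap[0]                              # next event of the point process
        estimate += weight * (m - prev_min)      # slice of the survival integral
        prev_min = m
        weight *= 1.0 - 1.0 / n                  # P(X > m) shrinks geometrically
        # Replace the minimum by a sample of X | X > m (memoryless shift).
        heapq.heapreplace(heap, m + rng.expovariate(1.0))
    return estimate
```

Note that the sum is truncated after `n_iter` terms, which is exactly the termination issue the abstract mentions: here the residual weight (1 − 1/n)^n_iter ≈ e^(−10) makes the truncation bias negligible, whereas the paper's randomized truncation removes the bias entirely.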

Citation (APA)

Walter, C. (2017). Point process-based Monte Carlo estimation. Statistics and Computing, 27(1), 219–236. https://doi.org/10.1007/s11222-015-9617-y
