Convergence Rate of Sieve Estimates

  • Shen, X.
  • Wong, W. H.

Abstract

In this paper, we develop a general theory for the convergence rate of sieve estimates, maximum likelihood estimates (MLEs) and related estimates obtained by optimizing certain empirical criteria in general parameter spaces. In many cases, especially when the parameter space is infinite dimensional, maximization over the whole parameter space is undesirable. In such cases, one has to perform maximization over an approximating space (sieve) of the original parameter space and allow the size of the approximating space to grow as the sample size increases. This method is called the method of sieves. In the case of maximum likelihood estimation, an MLE based on a sieve is called a sieve MLE. We found that the convergence rate of a sieve estimate is governed by (a) the local expected values, variances and L_2 entropy of the criterion differences and (b) the approximation error of the sieve. A robust nonparametric regression problem, a mixture problem and a nonparametric regression problem are discussed as illustrations of the theory. We also found that when the underlying space is too large, the estimate based on optimizing over the whole parameter space may not achieve the best possible rates of convergence, whereas the sieve estimate typically does not suffer from this difficulty.
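To make the method of sieves concrete, here is a minimal Python sketch of a least-squares sieve estimate for nonparametric regression. It is our illustration, not code from the paper: the sieve_regression helper, the polynomial basis and the n^(1/3) growth rule for the sieve dimension are all illustrative assumptions. Schematically (again our paraphrase, not the paper's exact theorem), the achieved rate behaves like the larger of a term driven by the local variance/entropy of the criterion differences over the sieve and the sieve's approximation error, which is why the sieve dimension must grow with n, but slowly.

```python
import numpy as np

def sieve_regression(x, y, dim=None):
    """Least-squares sieve estimate of a regression function on [0, 1].

    The sieve is the span of the first `dim` monomials; `dim` grows
    slowly with the sample size n (here ~ n^(1/3), an illustrative
    choice, not the paper's prescription).
    """
    n = len(x)
    if dim is None:
        dim = max(2, int(np.ceil(n ** (1 / 3))))  # sieve size grows with n
    # Design matrix with columns 1, x, x^2, ..., x^(dim-1).
    X = np.vander(x, N=dim, increasing=True)
    # Optimize the empirical criterion (sum of squared errors) over the sieve.
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda t: np.vander(np.atleast_1d(t), N=dim, increasing=True) @ coef

# Usage: estimate f(x) = sin(2*pi*x) from noisy samples.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=500)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(500)
f_hat = sieve_regression(x, y)
print(f_hat(0.25))  # should be close to sin(pi/2) = 1
```

With n = 500 the rule above gives a sieve of dimension 8; as n grows, the richer sieve reduces approximation error while the slow growth keeps the estimation (variance/entropy) term under control.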

Citation (APA)

Shen, X., & Wong, W. H. (1994). Convergence rate of sieve estimates. The Annals of Statistics, 22(2), 580-615. https://doi.org/10.1214/aos/1176325486
