Data sharpening methods for bias reduction in nonparametric regression

Abstract

We consider methods for kernel regression when the explanatory and/or response variables are adjusted prior to substitution into a conventional estimator. This "data-sharpening" procedure is designed to preserve the advantages of relatively simple, low-order techniques, for example, their robustness against design sparsity problems, yet attain the sorts of bias reductions that are commonly associated only with high-order methods. We consider Nadaraya-Watson and local-linear methods in detail, although data sharpening is applicable more widely. One approach in particular is found to give excellent performance. It involves adjusting both the explanatory and the response variables prior to substitution into a local-linear estimator. The change to the explanatory variables enhances resistance of the estimator to design sparsity, by increasing the density of design points in places where the original density had been low. When combined with adjustment of the response variables, it produces a reduction in bias by an order of magnitude. Moreover, these advantages are available in multivariate settings. The data-sharpening step is simple to implement, since it is explicitly defined. It does not involve functional inversion, solution of equations or use of pilot bandwidths.
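To make the sharpening idea concrete: with Nadaraya-Watson weights w_i(x) = K((x - X_i)/h) / Σ_j K((x - X_j)/h) and fit m̂(x) = Σ_i w_i(x) Y_i, response sharpening replaces each Y_i by Y_i' = 2Y_i - m̂(X_i) before substituting back into the same estimator. The sketch below illustrates that single step; it is a minimal illustration under assumed choices (a Gaussian kernel, a fixed bandwidth h, the helper names nw_estimate and sharpened_nw_estimate, and synthetic data), not the authors' full procedure, which also adjusts the explanatory variables and works with a local-linear fit.

```python
import numpy as np

def nw_estimate(x_eval, X, Y, h):
    """Nadaraya-Watson estimate with a Gaussian kernel at the points x_eval."""
    # Kernel weight matrix: W[k, i] = K((x_eval[k] - X[i]) / h)
    W = np.exp(-0.5 * ((x_eval[:, None] - X[None, :]) / h) ** 2)
    return (W @ Y) / W.sum(axis=1)

def sharpened_nw_estimate(x_eval, X, Y, h):
    """Response-sharpened estimate (illustrative): replace Y_i by
    2*Y_i - m_hat(X_i), then substitute into the same NW estimator."""
    Y_sharp = 2.0 * Y - nw_estimate(X, X, Y, h)
    return nw_estimate(x_eval, X, Y_sharp, h)

# Toy usage on a smooth target with additive noise (assumed data)
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 1.0, 200))
Y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(200)
grid = np.linspace(0.05, 0.95, 50)
plain = nw_estimate(grid, X, Y, h=0.08)
sharp = sharpened_nw_estimate(grid, X, Y, h=0.08)
```

Response sharpening of this form is algebraically equivalent to "twicing" (smoothing the residuals and adding them back to the fit): the sharpened curve equals 2m̂(x) minus the smoother applied to m̂, which is what cancels the leading bias term. Note the sharpening step here is explicit, matching the abstract's point that no functional inversion, equation solving, or pilot bandwidth is required.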

Citation (APA)

Choi, E., Hall, P., & Rousson, V. (2000). Data sharpening methods for bias reduction in nonparametric regression. Annals of Statistics, 28(5), 1339–1355. https://doi.org/10.1214/aos/1015957396
