We consider methods for kernel regression when the explanatory and/or response variables are adjusted prior to substitution into a conventional estimator. This "data-sharpening" procedure is designed to preserve the advantages of relatively simple, low-order techniques, for example, their robustness against design sparsity problems, yet attain the sorts of bias reductions that are commonly associated only with high-order methods. We consider Nadaraya-Watson and local-linear methods in detail, although data sharpening is applicable more widely. One approach in particular is found to give excellent performance. It involves adjusting both the explanatory and the response variables prior to substitution into a local-linear estimator. The change to the explanatory variables enhances resistance of the estimator to design sparsity, by increasing the density of design points in places where the original density had been low. When combined with adjustment of the response variables, it produces a reduction in bias by an order of magnitude. Moreover, these advantages are available in multivariate settings. The data-sharpening step is simple to implement, since it is explicitly defined. It does not involve functional inversion, solution of equations, or use of pilot bandwidths.
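The abstract does not state the sharpening formulas, so the Python sketch below only illustrates the general idea: an ordinary local-linear fit, followed by the same fit applied to responses adjusted by a simple rule of the form Y_i' = 2*Y_i - ghat(X_i) with the same bandwidth. The Gaussian kernel, the bandwidth h, the simulated data, and this particular sharpening rule are illustrative assumptions, not the estimator defined in the paper; in particular, the authors' preferred method also sharpens the explanatory variables, which is not shown here.

```python
import numpy as np

def gauss_kernel(u):
    # Standard Gaussian kernel
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def local_linear(x_eval, X, Y, h):
    """Ordinary local-linear estimator of E[Y|X] at the points x_eval."""
    out = np.empty(len(x_eval), dtype=float)
    for j, x0 in enumerate(x_eval):
        d = X - x0
        w = gauss_kernel(d / h)
        # Weighted-least-squares closed form for the local-linear fit
        s0, s1, s2 = w.sum(), (w * d).sum(), (w * d**2).sum()
        t0, t1 = (w * Y).sum(), (w * d * Y).sum()
        denom = s0 * s2 - s1**2
        out[j] = (s2 * t0 - s1 * t1) / denom if denom > 0 else t0 / s0
    return out

# Simulated example (illustrative data, not from the paper)
rng = np.random.default_rng(0)
n, h = 200, 0.08
X = rng.uniform(0.0, 1.0, n)
Y = np.sin(2.0 * np.pi * X) + rng.normal(scale=0.2, size=n)

# Illustrative response sharpening: Y_i' = 2*Y_i - ghat(X_i),
# using the same bandwidth (no pilot bandwidth, consistent with the abstract).
ghat_at_X = local_linear(X, X, Y, h)
Y_sharp = 2.0 * Y - ghat_at_X

grid = np.linspace(0.05, 0.95, 50)
plain_fit = local_linear(grid, X, Y, h)       # conventional local-linear fit
sharp_fit = local_linear(grid, X, Y_sharp, h) # same estimator on sharpened responses
```

In this sketch the sharpening step is explicitly defined and requires no equation solving, which mirrors the simplicity the abstract emphasizes; the comparison of plain_fit and sharp_fit against the true regression function on the grid is the natural way to examine the bias reduction numerically.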
Choi, E., Hall, P., & Rousson, V. (2000). Data sharpening methods for bias reduction in nonparametric regression. Annals of Statistics, 28(5), 1339–1355. https://doi.org/10.1214/aos/1015957396