Discriminant analysis for two data sets in ℝᵈ with probability densities f and g can be based on the estimation of the set G = {x: f(x) ≥ g(x)}. We consider applications where it is appropriate to assume that the region G has a smooth boundary or belongs to another nonparametric class of sets. In particular, this assumption makes sense if discrimination is used as a data-analytic tool. Decision rules based on minimization of empirical risk over the whole class of sets and over sieves are considered. Their rates of convergence are obtained. We show that these rules achieve optimal rates for estimation of G and optimal rates of convergence for Bayes risks. An interesting conclusion is that the optimal rates for Bayes risks can be very fast, in particular, faster than the "parametric" root-n rate. These fast rates cannot be guaranteed for plug-in rules.
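The abstract contrasts empirical-risk-minimization rules with plug-in rules, which classify a point x to the first class when an estimate of f(x) ≥ g(x) holds. The following is a minimal plug-in sketch in one dimension, not the authors' ERM procedure; the kernel density estimator, the bandwidth h = 0.5, and the simulated Gaussian samples are illustrative assumptions:

```python
import math
import random

def kde(sample, h):
    """Return a Gaussian kernel density estimate with bandwidth h (illustrative choice)."""
    n = len(sample)
    c = 1.0 / (n * h * math.sqrt(2 * math.pi))
    return lambda x: c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample)

random.seed(0)
# Simulated data: density f centered at 0, density g centered at 3 (hypothetical example)
sample_f = [random.gauss(0.0, 1.0) for _ in range(500)]
sample_g = [random.gauss(3.0, 1.0) for _ in range(500)]

f_hat = kde(sample_f, h=0.5)
g_hat = kde(sample_g, h=0.5)

def in_G(x):
    """Plug-in estimate of G = {x : f(x) >= g(x)}: True means classify x to the f-population."""
    return f_hat(x) >= g_hat(x)

print(in_G(0.0), in_G(3.0))
```

A plug-in rule like this inherits the error of the density estimates over all of ℝᵈ, which is why, as the abstract notes, it cannot guarantee the fast Bayes-risk rates attainable by minimizing empirical risk directly over a class of candidate sets G.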
Mammen, E., & Tsybakov, A. B. (1999). Smooth discrimination analysis. Annals of Statistics, 27(6), 1808–1829. https://doi.org/10.1214/aos/1017939240