Sensitivity Based Generalization Error for Supervised Learning Problems with Application in Feature Selection

  • Yeung, D. S.

Abstract

A generalization error model provides theoretical support for a classifier's performance in terms of prediction accuracy. However, existing models give very loose error bounds, which explains why classification systems generally rely on experimental validation for their claims of prediction accuracy. In this talk we will revisit this problem and explore the idea of developing a new generalization error model based on the assumption that only prediction accuracy on unseen points in a neighbourhood of a training point will be considered, since it is unreasonable to require a classifier to accurately predict unseen points "far away" from the training samples. The new error model makes use of the concept of a sensitivity measure for an ensemble of multilayer feedforward neural networks (Multi-Layer Perceptrons or Radial Basis Function Neural Networks). Two important applications will be demonstrated: model selection and feature reduction for RBFNN classifiers. Experimental results on datasets such as the UCI benchmarks, the 1999 KDD Cup, and text categorization corpora will be presented.
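The abstract does not spell out the sensitivity measure itself. The sketch below is one plausible realization, assuming a Gaussian RBFNN and a stochastic sensitivity defined as the expected squared output change under small input perturbations; the function names, the perturbation scale `sigma`, and the per-feature scoring scheme are all illustrative assumptions, not the author's actual formulation.

```python
import numpy as np

def rbfnn_output(X, centers, widths, weights):
    # Gaussian RBF network: f(x) = sum_j w_j * exp(-||x - c_j||^2 / (2 s_j^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    return phi @ weights

def sensitivity(X, centers, widths, weights, sigma=0.1, n_perturb=100, seed=0):
    # Stochastic sensitivity: average squared output deviation when every
    # training point is perturbed inside a small Gaussian neighbourhood.
    rng = np.random.default_rng(seed)
    y0 = rbfnn_output(X, centers, widths, weights)
    total = 0.0
    for _ in range(n_perturb):
        dX = rng.normal(scale=sigma, size=X.shape)
        total += np.mean((rbfnn_output(X + dX, centers, widths, weights) - y0) ** 2)
    return total / n_perturb

def feature_sensitivity(X, centers, widths, weights, sigma=0.1, n_perturb=100, seed=0):
    # Per-feature score: perturb one input dimension at a time. Features whose
    # perturbation barely changes the output are candidates for removal,
    # which is the feature-reduction use mentioned in the abstract.
    rng = np.random.default_rng(seed)
    y0 = rbfnn_output(X, centers, widths, weights)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        acc = 0.0
        for _ in range(n_perturb):
            Xp = X.copy()
            Xp[:, j] += rng.normal(scale=sigma, size=X.shape[0])
            acc += np.mean((rbfnn_output(Xp, centers, widths, weights) - y0) ** 2)
        scores[j] = acc / n_perturb
    return scores
```

Under this reading, model selection would compare candidate RBFNN architectures by their sensitivity values, and feature reduction would iteratively drop the lowest-scoring input dimension.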

Citation (APA)

Yeung, D. S. (2009). Sensitivity Based Generalization Error for Supervised Learning Problems with Application in Feature Selection (pp. 3–3). https://doi.org/10.1007/978-3-642-03348-3_3
