Inductive Principles for Learning from Data

  • Cherkassky, V.
Abstract

Face detection/recognition applications involve ill-defined concepts (such as 'face' or 'person') that cannot be specified a priori in terms of a small set of features. This implies the need for learning unknown class decision boundaries from data (i.e., images with known class labels). This task is a special case of the generic problem of predictive classification or pattern recognition, where the goal is to estimate class decision boundaries using available (training) data. There are many learning methods (i.e., constructive algorithms) for predictive classification. However, most approaches are heuristic, due to the inherent complexity of estimation with finite data and the lack of a conceptual framework. This paper describes several principled approaches (called inductive principles) for estimating dependencies from data. The focus is on the general conceptual framework and on the major issues related to learning, rather than on specific learning algorithms. The following inductive principles for learning with finite data are described: penalization, structural risk minimization, Bayesian inference and minimum description length. Finally, we briefly describe a powerful new learning algorithm called the Support Vector Machine, and its applications to face detection.
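To make the penalization idea the abstract mentions concrete, the following is a minimal sketch (not the chapter's own implementation): a linear soft-margin SVM trained by subgradient descent on a regularized hinge loss, where the penalty term λ‖w‖²/2 controls model complexity while the hinge term fits the training data. All names, parameters, and the toy data are illustrative assumptions.

```python
def train_linear_svm(data, labels, lam=0.01, epochs=200, lr=0.1):
    """Minimize lam/2 * ||w||^2 + mean hinge loss by subgradient descent.

    This is the penalization inductive principle in miniature: the first
    term penalizes complex (large-norm) models, the second fits the data.
    """
    w = [0.0, 0.0]
    b = 0.0
    n = len(data)
    for _ in range(epochs):
        # Subgradient of the penalty term.
        gw = [lam * w[0], lam * w[1]]
        gb = 0.0
        for x, y in zip(data, labels):
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            if margin < 1:  # point inside the margin: hinge subgradient
                gw[0] -= y * x[0] / n
                gw[1] -= y * x[1] / n
                gb -= y / n
        w[0] -= lr * gw[0]
        w[1] -= lr * gw[1]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Two linearly separable 2-D clusters, standing in for feature vectors
# of 'face' (+1) and 'non-face' (-1) image windows.
X = [(1.0, 1.0), (1.5, 2.0), (2.0, 1.5),
     (-1.0, -1.0), (-1.5, -2.0), (-2.0, -1.5)]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

Structural risk minimization can be read into the same sketch: sweeping λ over a decreasing sequence defines a nested structure of hypothesis classes of increasing capacity, from which the best model is selected.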

Citation (APA)

Cherkassky, V. (1998). Inductive Principles for Learning from Data. In Face Recognition (pp. 86–107). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-72201-1_5
