Bootstrap Techniques for Error Estimation


Abstract

The design of a pattern recognition system requires careful attention to error estimation. The error rate is the most important descriptor of a classifier's performance. The commonly used estimates of error rate are based on the holdout method, the resubstitution method, and the leave-one-out method. All suffer from either large bias or large variance, and their sampling distributions are not known. Bootstrapping refers to a class of procedures that resample given data by computer. It permits determining the statistical properties of an estimator when very little is known about the underlying distribution and no additional samples are available. Since its publication in the last decade, the bootstrap technique has been successfully applied to many statistical estimation and inference problems. However, it has not been exploited in the design of pattern recognition systems. We report results on the application of several bootstrap techniques in estimating the error rate of 1-NN and quadratic classifiers. Our experiments show that, in most cases, the confidence interval of a bootstrap estimator of classification error is smaller than that of the leave-one-out estimator. The errors of 1-NN, quadratic, and Fisher classifiers are estimated for several real data sets. Copyright © 1987 by The Institute of Electrical and Electronics Engineers, Inc.
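To make the resampling idea concrete, the following is a minimal sketch of one bootstrap error estimator for a 1-NN classifier: draw a bootstrap sample of the data with replacement, train on it, and score the classifier on the points left out of that sample (an out-of-bag, e0-style estimate), averaging over replicates. This is an illustration under assumed conventions, not the exact set of estimators compared in the paper; the function names and data layout are hypothetical.

```python
import random

def nn_classify(train, x):
    # 1-NN rule: return the label of the training point closest to x
    # (squared Euclidean distance; train is a list of (features, label) pairs).
    return min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

def bootstrap_error(data, n_boot=200, seed=0):
    """Out-of-bag bootstrap estimate of the 1-NN error rate (a sketch).

    For each of n_boot replicates, resample the data with replacement,
    classify the samples that did not appear in the resample, and average
    the per-replicate error rates.
    """
    rng = random.Random(seed)
    n = len(data)
    errors = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # resample with replacement
        picked = set(idx)
        oob = [i for i in range(n) if i not in picked]  # out-of-bag samples
        if not oob:
            continue  # rare: every point was drawn at least once
        train = [data[i] for i in idx]
        wrong = sum(nn_classify(train, data[i][0]) != data[i][1] for i in oob)
        errors.append(wrong / len(oob))
    return sum(errors) / len(errors)
```

For example, on two well-separated clusters the estimator should report a low error rate, while overlapping clusters would drive it toward the Bayes error of the problem. The leave-one-out estimator discussed in the abstract differs in that it removes exactly one sample at a time rather than resampling, which is what gives it higher variance on small data sets.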

Cite

APA

Jain, A. K., Dubes, R. C., & Chen, C. C. (1987). Bootstrap Techniques for Error Estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-9(5), 628–633. https://doi.org/10.1109/TPAMI.1987.4767957
