An integrated model for evaluating the amount of data required for reliable recognition

Abstract

Many recognition procedures rely on the consistency of a subset of data features with a hypothesis as sufficient evidence for the presence of the corresponding object. The performance of such procedures is analyzed using a probabilistic model, which yields expressions for the sufficient size of the data subsets that, if consistent, guarantee the validity of the hypotheses with any prespecified confidence. The analysis focuses on 2D objects and the class of affine transformations, and is based, for the first time, on an integrated model that accounts for the shape of the objects involved, the accuracy of the measured data, the clutter present in the scene, the class of transformations involved, the accuracy of the localization, and the confidence required of the hypotheses. Most of these factors can be quantified cumulatively by a single parameter, termed the "effective similarity", which largely determines the sufficient subset size.
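To make the idea of a "sufficient subset size" concrete, the following is a minimal sketch under simplified assumptions not taken from the paper: each of n_clutter spurious features is assumed to fall, independently and with probability p_accidental, inside the error region predicted by a wrong hypothesis, and the smallest consistent-subset size k is sought for which the chance of such an accidental match is below 1 minus the required confidence. The function name and parameters are hypothetical, and the binomial tail bound stands in for the paper's more detailed integrated model and its effective-similarity parameter.

```python
from math import comb

def sufficient_subset_size(n_clutter, p_accidental, confidence):
    """Smallest k such that k or more clutter features are consistent with a
    wrong hypothesis by chance with probability at most 1 - confidence.
    Illustrative binomial model only, not the paper's analysis."""
    alpha = 1.0 - confidence
    for k in range(n_clutter + 1):
        # Tail probability P(X >= k) for X ~ Binomial(n_clutter, p_accidental)
        tail = sum(
            comb(n_clutter, i) * p_accidental**i * (1 - p_accidental) ** (n_clutter - i)
            for i in range(k, n_clutter + 1)
        )
        if tail <= alpha:
            return k
    return None  # requested confidence unreachable under these parameters

# Example: 200 clutter features, each accidentally consistent with
# probability 0.02, and 99% confidence required in an accepted hypothesis.
print(sufficient_subset_size(200, 0.02, 0.99))
```

In this toy model, larger clutter or a looser accuracy threshold (a larger p_accidental) raises the required subset size, which mirrors the role the abstract ascribes to the effective-similarity parameter.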

Citation (APA)

Lindenbaum, M. (1996). An integrated model for evaluating the amount of data required for reliable recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1035, pp. 457–466). Springer Verlag. https://doi.org/10.1007/3-540-60793-5_99
