The majority of current methods in object classification use the one-against-rest training scheme. We argue that when applied to a large number of classes, this strategy is problematic: as the number of classes increases, the negative class becomes a very large and complicated collection of images. The resulting classification problem then becomes extremely unbalanced, and kernel SVM classifiers trained on such sets require long training times and are slow at prediction. To address these problems, we propose to treat the negative class as a background and characterize it by a prior distribution. Further, we propose to construct "hybrid" classifiers, which are trained to separate this distribution from the samples of the positive class. A typical classifier first projects the inputs to a one-dimensional space (via a function that may be non-linear) and then thresholds this projection. Theoretical results and empirical evaluation suggest that, after projection, the background has a relatively simple distribution, which is much easier to parameterize and work with. Our results show that hybrid classifiers offer an advantage over SVM classifiers, both in performance and complexity, especially when the negative (background) class is large. © 2012 Springer-Verlag.
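The core idea above — project the inputs to one dimension, model the background there by a simple prior, and threshold — can be illustrated with a minimal sketch. This is not the authors' method: the synthetic data, the linear projection direction, the one-dimensional Gaussian background model, and the three-sigma threshold are all illustrative assumptions standing in for the learned (possibly non-linear) projection and prior described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: a large, diffuse "background" class and a
# small, compact positive class in a 10-dimensional feature space.
background = rng.normal(0.0, 1.0, size=(5000, 10))
positives = rng.normal(1.5, 0.5, size=(200, 10))

# Illustrative projection direction: the difference between the class
# means, normalized. This is a crude linear stand-in for whatever
# learned, possibly non-linear projection is actually used.
w = positives.mean(axis=0) - background.mean(axis=0)
w /= np.linalg.norm(w)

# After projection, characterize the background by a simple 1-D Gaussian
# prior instead of keeping all of its samples around.
bg_proj = background @ w
mu, sigma = bg_proj.mean(), bg_proj.std()

# Threshold the projection against the background prior (here, three
# standard deviations above the background mean) to classify.
threshold = mu + 3.0 * sigma

def classify(x):
    """Label inputs positive if their projection clears the threshold."""
    return x @ w > threshold
```

The point of the sketch is the complexity argument from the abstract: at test time, prediction is a single dot product and a comparison, and the background never needs to be stored as individual negative examples, only as the two parameters of its projected distribution.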
Osadchy, M., Keren, D., & Fadida-Specktor, B. (2012). Hybrid classifiers for object classification with a rich background. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7576 LNCS, pp. 284–297). https://doi.org/10.1007/978-3-642-33715-4_21