Towards a linear combination of dichotomizers by margin maximization

Abstract

When dealing with two-class problems, the combination of several dichotomizers is an established technique for improving classification performance. In this context the margin is a central concept, since several theoretical results show that improving the margin on the training set is beneficial for the generalization error of a classifier. In particular, this has been analyzed with reference to learning algorithms based on boosting, which aim to build strong classifiers through the combination of many weak classifiers. In this paper we experimentally verify whether margin maximization can also be beneficial when combining already trained classifiers. We employ an algorithm for evaluating the weights of a linear convex combination of dichotomizers so as to maximize the margin of the combination on the training set. Several experiments performed on publicly available data sets show that a combination based on margin maximization can be particularly effective when compared with other established fusion methods. © 2009 Springer Berlin Heidelberg.
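The weight-selection idea in the abstract can be illustrated with a small sketch. This is a hypothetical reconstruction, not the authors' algorithm: it assumes two already trained dichotomizers producing real-valued scores, labels in {-1, +1}, and a simple grid search over the single convex weight w that maximizes the minimum margin of the combination on the training set (all names and values below are illustrative).

```python
def min_margin(scores, labels):
    """Smallest margin y_i * f(x_i) over the training set."""
    return min(y * s for s, y in zip(scores, labels))

def max_margin_weight(s1, s2, labels, steps=1000):
    """Grid-search the convex weight w in [0, 1] that maximizes
    the minimum training-set margin of w*s1 + (1-w)*s2."""
    best_w, best_m = 0.0, float("-inf")
    for k in range(steps + 1):
        w = k / steps
        combined = [w * a + (1 - w) * b for a, b in zip(s1, s2)]
        m = min_margin(combined, labels)
        if m > best_m:
            best_w, best_m = w, m
    return best_w, best_m

# Toy example: scores of two dichotomizers on four training samples
labels = [+1, +1, -1, -1]
s1 = [0.9, 0.1, -0.8, -0.2]   # hypothetical classifier 1, outputs in [-1, 1]
s2 = [0.3, 0.6, -0.1, -0.9]   # hypothetical classifier 2
w, m = max_margin_weight(s1, s2, labels)
```

In this toy case the best convex combination attains a larger minimum margin than either dichotomizer alone, which is the effect the paper investigates; with more than two classifiers the same problem can be posed as a linear program over the weight simplex.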

Citation (APA):

Marrocco, C., Molinara, M., Ricamato, M. T., & Tortorella, F. (2009). Towards a linear combination of dichotomizers by margin maximization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5716 LNCS, pp. 1043–1052). https://doi.org/10.1007/978-3-642-04146-4_111
