Learning with randomized majority votes

Abstract

We propose algorithms for producing weighted majority votes that learn by probing the empirical risk of a randomized (uniformly weighted) majority vote, instead of probing the zero-one loss, at some margin level, of the deterministic weighted majority vote, as is often proposed. The learning algorithms minimize a risk bound that is convex in the weights. Our numerical results indicate that learners producing a weighted majority vote based on the empirical risk of the randomized majority vote at some finite margin have no significant advantage over learners that achieve the same task based on the empirical risk at zero margin. We also find that it is sufficient for learners to minimize only the empirical risk of the randomized majority vote at a fixed number of voters, without explicitly considering the entropy of the distribution of voters. Finally, our extensive numerical results indicate that the proposed learning algorithms produce weighted majority votes that generally compare favorably to those produced by AdaBoost. © 2010 Springer-Verlag Berlin Heidelberg.
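To make the central quantity concrete, the Python sketch below gives one natural reading of "the empirical risk of the randomized majority vote at a fixed number of voters": for each example, a fixed odd number of base classifiers is drawn i.i.d. from the weight distribution, and the example counts as an error when a majority of the drawn voters err, which reduces to a binomial tail of the per-example weighted (Gibbs) error. The function and variable names (randomized_mv_risk, fit_weights, H, q) are assumptions for illustration, and the direct numerical minimization shown here is not the authors' convex bound-minimization algorithm, which is specified in the paper itself.

import numpy as np
from scipy.stats import binom
from scipy.optimize import minimize

def randomized_mv_risk(q, H, y, n_voters=7):
    """Empirical risk of a randomized majority vote with n_voters base
    classifiers drawn i.i.d. from the weight distribution q.

    H : (n_examples, m_voters) matrix of base predictions in {-1, +1}
    y : (n_examples,) labels in {-1, +1}
    q : (m_voters,) weights on the probability simplex
    """
    # Per-example probability that a single voter drawn from q errs (Gibbs error).
    per_example_err = ((H * y[:, None]) < 0).astype(float) @ q
    # Majority of n_voters (odd) i.i.d. draws errs when more than half of them err:
    # binomial tail P[X > n_voters // 2] with per-example success probability.
    risk_per_example = binom.sf(n_voters // 2, n_voters, per_example_err)
    return risk_per_example.mean()

def fit_weights(H, y, n_voters=7, seed=0):
    """Pick voter weights by directly minimizing the empirical randomized-MV risk
    (softmax parameterization keeps q on the simplex). Illustrative only."""
    rng = np.random.default_rng(seed)
    theta0 = rng.normal(scale=0.01, size=H.shape[1])

    def objective(theta):
        q = np.exp(theta - theta.max())
        q /= q.sum()
        return randomized_mv_risk(q, H, y, n_voters)

    res = minimize(objective, theta0, method="Nelder-Mead",
                   options={"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-8})
    q = np.exp(res.x - res.x.max())
    return q / q.sum()

if __name__ == "__main__":
    # Toy pool of weak voters: each agrees with the label about 70% of the time.
    rng = np.random.default_rng(0)
    y = rng.choice([-1, 1], size=200)
    H = np.stack([y * rng.choice([1, -1], size=200, p=[0.7, 0.3])
                  for _ in range(8)], axis=1)
    q = fit_weights(H, y)
    print("learned weights:", np.round(q, 3))
    print("empirical randomized-MV risk:", randomized_mv_risk(q, H, y))

In this toy run the risk of the randomized majority vote shrinks as weight concentrates on the more accurate voters; the paper's contribution is a risk bound, convex in the weights, that makes this minimization well behaved, together with an empirical comparison against AdaBoost.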

Citation (APA)

Lacasse, A., Laviolette, F., Marchand, M., & Turgeon-Boutin, F. (2010). Learning with randomized majority votes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6322 LNAI, pp. 162–177). https://doi.org/10.1007/978-3-642-15883-4_11
