Entropy Regularized LPBoost

Abstract

In this paper we discuss boosting algorithms that maximize the soft margin of the produced linear combination of base hypotheses. LPBoost is the most straightforward boosting algorithm for doing this: it maximizes the soft margin by solving a linear programming problem. While it performs well on natural data, there are cases where its number of iterations is linear in the number of examples rather than logarithmic. By simply adding a relative entropy regularizer to the linear objective of LPBoost, we arrive at the Entropy Regularized LPBoost algorithm, for which we prove a logarithmic iteration bound. A previous algorithm, SoftBoost, has the same iteration bound, but its generalization error often decreases slowly in the early iterations. Entropy Regularized LPBoost does not suffer from this problem and has a simpler, more natural motivation.
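
The per-iteration optimization the abstract describes can be made concrete. The sketch below is illustrative only: the function name erlp_distribution, the parameter names eta and nu, and the use of cvxpy as a generic convex solver are our assumptions, standing in for the paper's own solution method. It computes the distribution on the examples that minimizes the maximum edge of the hypotheses chosen so far, plus (1/η) times the relative entropy to the uniform distribution, over the capped probability simplex.

```python
# Sketch of the Entropy Regularized LPBoost distribution update
# (an illustration based on the abstract, not the authors' code).
# Given the edge vectors u^q (with entries u^q_n = y_n * h^q(x_n))
# of the base hypotheses chosen so far, the distribution d on the
# N examples solves the convex program
#
#   min_{d, gamma}  gamma + (1/eta) * sum_n d_n ln(N d_n)
#   s.t.            d . u^q <= gamma  for every chosen hypothesis q,
#                   sum_n d_n = 1,    0 <= d_n <= 1/nu.
#
# Dropping the entropy term recovers plain soft-margin LPBoost.
import cvxpy as cp
import numpy as np

def erlp_distribution(U, nu, eta):
    """U: (T, N) matrix of edge vectors, one row per chosen hypothesis;
    nu in [1, N]: capping parameter; eta > 0: regularization strength."""
    T, N = U.shape
    d = cp.Variable(N)
    gamma = cp.Variable()
    d0 = np.full(N, 1.0 / N)  # uniform prior distribution
    # sum(kl_div(d, d0)) = sum_n (d_n ln(d_n/d0_n) - d_n + d0_n), which
    # equals the relative entropy Delta(d, d0) once sum(d) = 1 is enforced.
    objective = cp.Minimize(gamma + cp.sum(cp.kl_div(d, d0)) / eta)
    constraints = [U @ d <= gamma,   # gamma upper-bounds every edge
                   cp.sum(d) == 1,
                   d >= 0,
                   d <= 1.0 / nu]    # capping: the "soft" in soft margin
    cp.Problem(objective, constraints).solve()  # needs an exp-cone solver, e.g. SCS
    return d.value, gamma.value
```

The capping constraint d_n ≤ 1/ν is what yields a soft margin: it prevents the distribution from concentrating on a few noisy examples. As η → ∞ the entropy term vanishes and the program reduces to the linear program solved by plain LPBoost.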

Citation (APA)

Warmuth, M. K., Glocer, K. A., & Vishwanathan, S. V. N. (2008). Entropy regularized LPBoost. In Lecture Notes in Computer Science (Vol. 5254 LNAI, pp. 256–271). Springer. https://doi.org/10.1007/978-3-540-87987-9_23
