Using generalization error bounds to train the set covering machine

Abstract

In this paper we eliminate the need for parameter estimation associated with the set covering machine (SCM) by directly minimizing generalization error bounds. First, we consider a sub-optimal greedy heuristic algorithm termed the bound set covering machine (BSCM). Next, we propose the branch and bound set covering machine (BBSCM) and prove that it finds the classifier with the smallest generalization error bound. We further justify the BBSCM algorithm empirically via a heuristic relaxation, called BBSCM(τ), which guarantees a solution whose bound is within a factor τ of the optimal. Experiments against the support vector machine (SVM) and SCM algorithms demonstrate that the proposed approaches can lead to some or all of the following: 1) faster running times, 2) sparser classifiers, and 3) competitive generalization error, all while avoiding the need for parameter estimation. © 2008 Springer-Verlag Berlin Heidelberg.
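To make the branch-and-bound idea concrete, here is a minimal sketch of a BBSCM(τ)-style search: it explores conjunctions of boolean features, keeps the conjunction with the smallest bound value, and prunes any branch whose optimistic completion cannot beat the incumbent even after relaxing by a factor τ. Note that `toy_bound`, the feature representation, and all names below are illustrative assumptions, not the paper's actual sample-compression bound or implementation.

```python
# Illustrative sketch only: branch-and-bound over conjunctions of boolean
# features, with tau-relaxed pruning of the search tree.
# `toy_bound` is a hypothetical stand-in, NOT the bound derived in the paper.

import math

def toy_bound(k, r, m):
    # Hypothetical bound value: grows with the number of features k and the
    # number of training errors r on m examples (errors weighted more heavily).
    return (k + 2 * r + 1) * math.log(2 * m) / m

def num_errors(feature_set, X, y):
    # A conjunction predicts positive iff every chosen feature fires.
    preds = [all(f(x) for f in feature_set) for x in X]
    return sum(int(p) != t for p, t in zip(preds, y))

def bbscm_tau(features, X, y, tau=1.0):
    """Return (feature indices, bound) for the best conjunction found.
    With tau = 1 the search is exact; tau > 1 prunes more aggressively while
    guaranteeing the returned bound is within a factor tau of the optimum."""
    m = len(X)
    best_set, best_bound = [], toy_bound(0, num_errors([], X, y), m)
    stack = [([], 0)]  # (chosen feature indices, next candidate index)
    while stack:
        chosen, start = stack.pop()
        b = toy_bound(len(chosen),
                      num_errors([features[i] for i in chosen], X, y), m)
        if b < best_bound:
            best_set, best_bound = list(chosen), b
        # Optimistic value of any extension: one more feature, zero errors.
        # Prune when even tau times that optimism cannot beat the incumbent.
        if tau * toy_bound(len(chosen) + 1, 0, m) >= best_bound:
            continue
        for i in range(start, len(features)):
            stack.append((chosen + [i], i + 1))
    return best_set, best_bound

if __name__ == "__main__":
    # Toy data: label 1 iff both coordinates exceed 0.5.
    X = [(0.2, 0.9), (0.7, 0.8), (0.6, 0.1), (0.9, 0.9), (0.3, 0.2)]
    y = [0, 1, 0, 1, 0]
    feats = [lambda p: p[0] > 0.5, lambda p: p[1] > 0.5]
    subset, bound = bbscm_tau(feats, X, y, tau=1.2)
    print("chosen features:", subset, "bound:", round(bound, 3))
```

With τ = 1 the pruning rule discards only branches that provably cannot improve the incumbent, mirroring the exact-optimality claim for BBSCM; larger τ prunes more of the tree and trades optimality for speed, which is the trade-off BBSCM(τ) is described as making.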

Citation (APA)

Hussain, Z., & Shawe-Taylor, J. (2008). Using generalization error bounds to train the set covering machine. In Lecture Notes in Computer Science (Vol. 4984, pp. 258–268). Springer. https://doi.org/10.1007/978-3-540-69158-7_28
