Empirical Evaluation of Ensemble Techniques for a Pittsburgh Learning Classifier System

Abstract

Ensemble techniques have proved very successful at boosting the performance of several types of machine learning methods. In this paper, we illustrate their usefulness in combination with GAssist, a Pittsburgh-style Learning Classifier System. Two types of ensembles are tested. First, we evaluate an ensemble for consensus prediction: several rule sets learnt using GAssist with different initial random seeds are combined through a flat voting scheme, in a fashion similar to bagging. The second type of ensemble is intended to deal more efficiently with ordinal classification problems, that is, problems where the classes have an intrinsic order and, in case of misclassification, it is preferable to predict a class close to the correct one within that order. The ensemble for consensus prediction is evaluated using 25 datasets from the UCI repository. The hierarchical ensemble is evaluated using a bioinformatics dataset. Both methods significantly improve the performance and behaviour of GAssist in all the tested domains. © 2008 Springer Berlin Heidelberg.
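
As a concrete illustration of the first scheme, the sketch below shows consensus prediction by flat majority voting over classifiers trained with different random seeds. This is a minimal Python sketch, not the authors' GAssist implementation; the learner_factory interface, its random_seed parameter, and the fit/predict methods are hypothetical placeholders for any base learner.

    # Minimal sketch (assumed interface, not GAssist itself): consensus
    # prediction by flat majority voting over rule sets trained with
    # different random seeds.
    from collections import Counter

    def train_ensemble(learner_factory, X_train, y_train, seeds):
        """Train one rule set (model) per random seed."""
        models = []
        for seed in seeds:
            model = learner_factory(random_seed=seed)  # hypothetical constructor
            model.fit(X_train, y_train)
            models.append(model)
        return models

    def consensus_predict(models, x):
        """Flat voting: each rule set casts one vote; the majority class wins."""
        votes = Counter(model.predict(x) for model in models)
        return votes.most_common(1)[0][0]

In this flat scheme every rule set is weighted equally, which mirrors the bagging-like combination described in the abstract; only the random seed varies between ensemble members.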

Citation (APA)

Bacardit, J., & Krasnogor, N. (2008). Empirical evaluation of ensemble techniques for a Pittsburgh learning classifier system. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4998 LNAI, pp. 255–268). Springer Verlag. https://doi.org/10.1007/978-3-540-88138-4_15
