Solving regression by learning an ensemble of decision rules

Abstract

We introduce a novel decision rule induction algorithm for solving the regression problem. Only a few existing approaches apply decision rules to this type of prediction problem. The algorithm uses a single decision rule as the base classifier in the ensemble. Forward stagewise additive modeling is used to obtain the ensemble of decision rules. We consider two loss functions commonly used in regression problems, the squared-error and absolute-error loss. The minimization of empirical risk based on these loss functions is performed by two optimization techniques: gradient boosting and the least angle technique. The main advantage of decision rules is their simplicity and good interpretability. The prediction model in the form of an ensemble of decision rules is powerful, as shown by the experimental results presented in the paper. © 2008 Springer-Verlag Berlin Heidelberg.
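The core idea described in the abstract can be illustrated with a minimal sketch: at each stage, a single decision rule (one axis-aligned condition with a constant response) is fitted to the residuals, which for squared-error loss coincide with the negative gradient, so each stage is a gradient-boosting step. This is a simplified illustration under assumed details (one condition per rule, exhaustive threshold search, a fixed shrinkage factor), not the authors' exact algorithm, which is described in the paper itself.

```python
import numpy as np

def fit_rule(X, r):
    """Find the single decision rule (one condition `x_j <= t` or `x_j > t`
    with a constant response v) that best fits residuals r under squared error."""
    best, best_err = None, np.inf
    n, d = X.shape
    for j in range(d):
        for t in np.unique(X[:, j]):
            for side in (np.less_equal, np.greater):
                mask = side(X[:, j], t)
                if not mask.any():
                    continue
                v = r[mask].mean()                  # optimal response for squared error
                err = np.sum((r - v * mask) ** 2)   # empirical risk of this rule
                if err < best_err:
                    best_err, best = err, (j, t, side, v)
    return best

def fit_ensemble(X, y, n_rules=20, shrinkage=0.5):
    """Forward stagewise additive modeling: each stage adds one rule
    fitted to the current residuals, scaled by a shrinkage factor."""
    intercept = y.mean()                            # default (constant) rule
    pred = np.full(len(y), intercept)
    rules = []
    for _ in range(n_rules):
        j, t, side, v = fit_rule(X, y - pred)
        rules.append((j, t, side, shrinkage * v))
        pred += shrinkage * v * side(X[:, j], t)
    return intercept, rules

def predict(intercept, rules, X):
    """Sum the responses of all rules whose conditions the example satisfies."""
    pred = np.full(len(X), intercept, dtype=float)
    for j, t, side, v in rules:
        pred += v * side(X[:, j], t)
    return pred
```

For absolute-error loss, the stagewise scheme is the same, but the rule's response would be a median of the covered residuals rather than their mean; the shrinkage factor and number of rules here are illustrative hyperparameters, not values from the paper.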

Citation (APA)

Dembczyński, K., Kotłowski, W., & Słowiński, R. (2008). Solving regression by learning an ensemble of decision rules. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5097 LNAI, pp. 533–544). https://doi.org/10.1007/978-3-540-69731-2_52
