Gradient descent style leveraging of decision trees and stumps for misclassification cost performance

Abstract

This paper investigates, for the task of classifier learning in the presence of misclassification costs, several gradient descent style leveraging approaches: Schapire and Singer's AdaBoost.MH and AdaBoost.MR [16], Collins et al.'s multiclass logistic regression method [4], and some modifications of these that retain the gradient descent style approach. Decision trees and stumps, learned by modified versions of Quinlan's C4.5 [15], are used as the underlying base classifiers. Experiments are reported comparing the average-cost performance of the modified methods with that of the originals, and with the previously proposed "Cost Boosting" methods of Ting and Zheng [21] and Ting [18], which also use decision trees built on modified C4.5 code but have no interpretation in the gradient descent framework. While some of the modifications improve upon the originals in cost performance for both trees and stumps, the comparison with tree-based Cost Boosting suggests that, of the methods first experimented with here, one based on stumps has the most promise.
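As a rough illustration of the baseline ingredients the paper builds on, the sketch below trains a plain (cost-insensitive) AdaBoost ensemble of decision stumps and scores it by average misclassification cost. It is a minimal sketch only: the use of scikit-learn depth-1 trees in place of modified C4.5, the synthetic data, and the 2x2 cost matrix are all illustrative assumptions, not the paper's actual methods or experimental setup.

```python
# Minimal sketch: AdaBoost over decision stumps, scored by average
# misclassification cost. Illustrative only -- not the paper's
# modified C4.5-based methods; the cost matrix is an assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=0)
y = 2 * y - 1  # relabel to {-1, +1} for the AdaBoost update

T, n = 25, len(y)
w = np.full(n, 1.0 / n)  # uniform initial example weights
stumps, alphas = [], []
for _ in range(T):
    # A "stump" is a depth-1 decision tree fit to the weighted sample.
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)  # stump's vote weight
    w *= np.exp(-alpha * y * pred)         # upweight misclassified examples
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

def ensemble_predict(X):
    score = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
    return np.where(score >= 0, 1, -1)

# Average cost under an assumed cost matrix cost[(true, predicted)],
# where misclassifying a positive example costs 5x more.
cost = {(-1, -1): 0.0, (-1, 1): 1.0, (1, -1): 5.0, (1, 1): 0.0}
pred = ensemble_predict(X)
avg_cost = np.mean([cost[(int(t), int(p))] for t, p in zip(y, pred)])
print(f"average cost: {avg_cost:.3f}")
```

Note that the boosting loop above ignores the cost matrix entirely; the paper's contribution concerns methods that fold such costs into the leveraging process itself, which this sketch does not attempt.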

Cite

APA

Cameron-Jones, M. (2001). Gradient descent style leveraging of decision trees and stumps for misclassification cost performance. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2256, pp. 107–118). Springer Verlag. https://doi.org/10.1007/3-540-45656-2_10
