Many algorithms for inferring a decision tree from data involve a two-phase process: First, a very large decision tree is grown which typically ends up "over-fitting" the data. To reduce over-fitting, in the second phase, the tree is pruned using one of a number of available methods. The final tree is then output and used for classification on test data. In this paper, we suggest an alternative approach to the pruning phase. Using a given unpruned decision tree, we present a new method of making predictions on test data, and we prove that our algorithm's performance will not be "much worse" (in a precise technical sense) than that of the best reasonably small pruning of the given decision tree. Thus, our procedure is guaranteed to be competitive (in terms of the quality of its predictions) with any pruning algorithm. We prove that our procedure is very efficient and highly robust. Our method can be viewed as a synthesis of two previously studied techniques. First, we apply Cesa-Bianchi et al.'s [3] results on predicting using "expert advice" (where we view each pruning as an "expert") to obtain an algorithm that has provably low prediction loss, but that is computationally infeasible. Next, we generalize and apply a method developed by Buntine [2, 1] and Willems, Shtarkov and Tjalkens [18, 19] to derive a very efficient implementation of this procedure.
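To make the "each pruning is an expert" idea concrete, the following is a minimal, illustrative sketch (not the paper's pseudocode) of the computationally naive version described in the abstract: every pruning of a small decision tree is enumerated and treated as an expert, and their predictions are combined by exponentially weighted averaging. The tree structure, the node-level predictions, the squared loss, and the learning rate eta are assumptions chosen only for this example; the paper's actual contribution is an efficient per-node recursion, in the spirit of Buntine and Willems, Shtarkov and Tjalkens, that achieves the same effect without enumerating the prunings.

```python
# Illustrative sketch only: exponentially weighted "expert advice" over all
# prunings of a toy decision tree. Tree shape, predictions, loss, and eta
# are hypothetical choices for this example, not the paper's exact setup.

from dataclasses import dataclass
from itertools import product
import math

@dataclass
class Node:
    feature: int = None        # feature index tested at an internal node
    left: "Node" = None        # subtree followed when the feature is 0
    right: "Node" = None       # subtree followed when the feature is 1
    prediction: float = 0.5    # local prediction used if this node is a leaf

def prunings(node):
    """Enumerate all prunings of the subtree rooted at `node`.

    A pruning is a dict whose keys are id(n) for every node n that acts as
    a leaf in that pruning.
    """
    cut_here = [{id(node): True}]          # option 1: make this node a leaf
    if node.left is None:                  # a true leaf has only one pruning
        return cut_here
    keep_split = []                        # option 2: keep the split, prune below
    for pl, pr in product(prunings(node.left), prunings(node.right)):
        keep_split.append({**pl, **pr})
    return cut_here + keep_split

def predict_with_pruning(root, leaves, x):
    """Route x down the tree until a node marked as a leaf in this pruning."""
    node = root
    while id(node) not in leaves:
        node = node.right if x[node.feature] else node.left
    return node.prediction

def expert_advice_forecaster(root, stream, eta=1.0):
    """Exponentially weighted averaging over all prunings (the experts)."""
    experts = prunings(root)
    weights = [1.0] * len(experts)
    total_loss = 0.0
    for x, y in stream:
        preds = [predict_with_pruning(root, e, x) for e in experts]
        w_sum = sum(weights)
        p = sum(w * q for w, q in zip(weights, preds)) / w_sum
        total_loss += (p - y) ** 2                      # squared loss
        # Multiplicative update: each pruning pays for its own loss.
        weights = [w * math.exp(-eta * (q - y) ** 2)
                   for w, q in zip(weights, preds)]
    return total_loss

if __name__ == "__main__":
    # Toy usage: a depth-2 tree over two binary features, 20 examples.
    tree = Node(feature=0,
                left=Node(feature=1,
                          left=Node(prediction=0.1),
                          right=Node(prediction=0.9)),
                right=Node(prediction=0.8))
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)] * 5
    print("cumulative squared loss:", expert_advice_forecaster(tree, data))
```

The enumeration above is exactly what makes the naive algorithm infeasible: the number of prunings grows roughly doubly exponentially with the depth of the tree, which is why the efficient recursive weighting scheme derived in the paper is needed in practice.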
Helmbold, D. P., & Schapire, R. E. (1995). Predicting nearly as well as the best pruning of a decision tree. In Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995 (Vol. 1995-January, pp. 61–68). Association for Computing Machinery, Inc. https://doi.org/10.1145/225298.225305