Abstract
Decision trees have favorable properties, including interpretability, high computational efficiency, and the ability to learn from little training data. Learning an optimal decision tree is known to be NP-complete, so researchers have proposed many greedy algorithms, such as CART, to learn approximate solutions. Inspired by the popularity of neural networks, soft trees that support end-to-end training with back-propagation have attracted increasing attention. However, existing soft trees either lose interpretability due to continuous relaxation or rely on a two-stage procedure of end-to-end building followed by pruning. In this paper, we propose One-Stage Tree, which builds and prunes the decision tree jointly through a bilevel optimization problem. Moreover, we leverage the reparameterization trick and proximal iterations to keep the tree discrete during end-to-end training. As a result, One-Stage Tree reduces the performance gap between training and testing and retains the advantage of interpretability. Extensive experiments demonstrate that the proposed One-Stage Tree outperforms CART and existing soft trees on classification and regression tasks.
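The contrast the abstract draws between continuous relaxation and discrete routing can be made concrete with a toy example. The sketch below is not the paper's One-Stage Tree; it is a generic depth-2 soft decision tree with made-up parameters (`W`, `b`, `leaves` are illustrative assumptions), showing how a soft tree averages over all leaves via sigmoid gates, while discrete routing follows a single root-to-leaf path:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative depth-2 tree: one root and two internal children,
# four leaves. Parameters are arbitrary, not learned.
W = np.array([[1.0, -1.0],    # root gate
              [0.5,  0.5],    # left child gate
              [-0.5, 1.0]])   # right child gate
b = np.array([0.0, 0.1, -0.1])
leaves = np.array([0.0, 1.0, 2.0, 3.0])  # leaf predictions

def soft_predict(x):
    """Continuous relaxation: every leaf contributes, weighted by
    the product of gate probabilities along its path (differentiable)."""
    p = sigmoid(W @ x + b)  # probability of branching left at each node
    path = np.array([p[0] * p[1],               # leaf 0: left, left
                     p[0] * (1 - p[1]),         # leaf 1: left, right
                     (1 - p[0]) * p[2],         # leaf 2: right, left
                     (1 - p[0]) * (1 - p[2])])  # leaf 3: right, right
    return float(path @ leaves)

def hard_predict(x):
    """Discrete routing: follow the higher-probability branch only,
    so exactly one leaf decides the output (interpretable path)."""
    p = sigmoid(W @ x + b)
    node = 1 if p[0] > 0.5 else 2           # which child we visit
    first_leaf = 0 if node == 1 else 2
    return float(leaves[first_leaf + (0 if p[node] > 0.5 else 1)])

x = np.array([2.0, -1.0])
print(soft_predict(x), hard_predict(x))
```

The gap between `soft_predict` and `hard_predict` on the same input is exactly the training/testing mismatch the abstract refers to; techniques such as the reparameterization trick and proximal iterations are ways to train with discrete routing while still propagating gradients.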
Citation
Xu, Z., Zhu, G., Yuan, C., & Huang, Y. (2022). One-Stage Tree: end-to-end tree builder and pruner. Machine Learning, 111(5), 1959–1985. https://doi.org/10.1007/s10994-021-06094-4