Tree Space Prototypes: Another Look at Making Tree Ensembles Interpretable


Abstract

Ensembles of decision trees perform well on many problems but are not interpretable. In contrast to existing interpretability approaches that explain relationships between features and predictions, we propose an alternative: interpreting tree ensemble classifiers by surfacing representative points for each class, called prototypes. We introduce a new distance for gradient boosted tree models and propose new, adaptive prototype selection methods with theoretical guarantees and the flexibility to choose a different number of prototypes in each class. We demonstrate our methods on random forests and gradient boosted trees, showing that the prototypes can perform as well as or better than the original tree ensemble when used as a nearest-prototype classifier. In a user study, humans were better at predicting the output of a tree ensemble classifier when using prototypes than when using Shapley values, a popular feature attribution method. Hence, prototypes present a viable alternative to feature-based explanations for tree ensembles.
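The abstract's nearest-prototype classifier assigns a point the class of its closest prototype under some distance. The sketch below illustrates the idea generically; the function names are illustrative, and plain Euclidean distance stands in for the paper's tree-space distance, which is not reproduced here.

```python
def nearest_prototype_predict(x, prototypes, distance):
    """Classify x as the class of its nearest prototype.

    prototypes: dict mapping class label -> list of prototype points
    distance:   callable (a, b) -> float
    """
    best_label, best_dist = None, float("inf")
    for label, points in prototypes.items():
        for p in points:
            d = distance(x, p)
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label


def euclidean(a, b):
    # Stand-in metric; the paper defines a distance in tree space instead.
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5


# Toy usage: two classes with different numbers of prototypes,
# mirroring the paper's per-class flexibility.
protos = {"pos": [(1.0, 1.0)], "neg": [(-1.0, -1.0), (-2.0, 0.0)]}
print(nearest_prototype_predict((0.8, 0.5), protos, euclidean))  # -> pos
```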

Citation (APA)

Tan, S., Soloviev, M., Hooker, G., & Wells, M. T. (2020). Tree Space Prototypes: Another Look at Making Tree Ensembles Interpretable. In FODS 2020 - Proceedings of the 2020 ACM-IMS Foundations of Data Science Conference (pp. 23–34). Association for Computing Machinery, Inc. https://doi.org/10.1145/3412815.3416893
