From Decision Trees to Explained Decision Sets

Abstract

Recent work demonstrated that path explanation redundancy is ubiquitous in decision trees: most often, paths in decision trees include literals that are redundant for explaining a prediction. The implication of this result is that decision trees must be explained. Nevertheless, there are applications of decision trees (DTs) where running an explanation algorithm is impractical. For example, in time- or power-constrained settings, running software algorithms for explaining predictions would be undesirable. Although the explanations of the paths in a DT do not in general themselves form a decision tree, this paper shows that one can construct a decision set from some of the decision tree explanations, such that the decision set is not only explained, but also exhibits a number of properties that are critical for replacing the original decision tree.
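The path redundancy described above can be illustrated with a small deletion-based check over binary features: a literal on a path is redundant if dropping it still forces the same prediction for every assignment to the freed features. This is a minimal sketch, not the paper's algorithm; the tree, feature names, and path are hypothetical.

```python
# Sketch (hypothetical tree, not the authors' implementation):
# deletion-based pruning of redundant literals from a DT path.
from itertools import product

def predict(x):
    """A hypothetical decision tree over binary features x1, x2, x3."""
    if x["x1"] == 1:
        return 1 if x["x2"] == 1 else 0
    return 1 if x["x2"] == 1 else (1 if x["x3"] == 1 else 0)

def explains(literals, features, target):
    """True if fixing `literals` entails prediction `target`
    for every assignment to the remaining features."""
    free = [f for f in features if f not in literals]
    for values in product([0, 1], repeat=len(free)):
        x = dict(literals, **dict(zip(free, values)))
        if predict(x) != target:
            return False
    return True

def prune_path(path, features, target):
    """Drop each literal whose removal still entails the prediction."""
    kept = dict(path)
    for feat in list(path):
        trial = {f: v for f, v in kept.items() if f != feat}
        if explains(trial, features, target):
            kept = trial
    return kept

features = ["x1", "x2", "x3"]
path = {"x1": 1, "x2": 1}  # literals on a path to a leaf predicting 1
print(prune_path(path, features, 1))  # → {'x2': 1}
```

Here the test on x1 is redundant: x2 = 1 alone entails the prediction, so the pruned path yields a one-literal rule, of the kind that could populate a decision set.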

Citation (APA)
Huang, X., & Marques-Silva, J. (2023). From Decision Trees to Explained Decision Sets. In Frontiers in Artificial Intelligence and Applications (Vol. 372, pp. 1100–1108). IOS Press BV. https://doi.org/10.3233/FAIA230384
