Explaining Decision-Tree Predictions by Addressing Potential Conflicts between Predictions and Plausible Expectations

Abstract

We offer an approach to explain Decision Tree (DT) predictions by addressing potential conflicts between aspects of these predictions and plausible expectations licensed by background information. We define four types of conflicts, operationalize their identification, and specify explanatory schemas that address them. Our human evaluation focused on the effect of explanations on users' understanding of a DT's reasoning and their willingness to act on its predictions. The results show that (1) explanations that address potential conflicts are considered at least as good as baseline explanations that just follow a DT path; and (2) the conflict-based explanations are deemed especially valuable when users' expectations disagree with the DT's predictions.
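The paper itself contains no code; purely as an illustrative sketch, the snippet below shows what a baseline "follow the DT path" explanation looks like for a scikit-learn DecisionTreeClassifier, together with a naive check for one kind of conflict (the user's expectation disagreeing with the DT's prediction). The dataset, feature names, and the hypothetical expected value are assumptions for illustration, not details from the paper, which defines four conflict types and richer explanatory schemas.

# Minimal sketch (not from the paper): a baseline path-following explanation
# plus a naive expectation-vs-prediction conflict check. The dataset and the
# 'expected' value below are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

def explain_path(clf, x, feature_names):
    """Render the root-to-leaf path for instance x as text (baseline explanation)."""
    tree = clf.tree_
    node_ids = clf.decision_path(x.reshape(1, -1)).indices
    steps = []
    for node in node_ids:
        # Skip the leaf node: both children are -1 there.
        if tree.children_left[node] == tree.children_right[node]:
            continue
        feat_idx = tree.feature[node]
        thresh = tree.threshold[node]
        op = "<=" if x[feat_idx] <= thresh else ">"
        steps.append(f"{feature_names[feat_idx]} = {x[feat_idx]:.2f} {op} {thresh:.2f}")
    return steps

x = data.data[100]
prediction = data.target_names[clf.predict(x.reshape(1, -1))[0]]
print("Path:", "; ".join(explain_path(clf, x, data.feature_names)))

# One illustrative conflict: the user's expectation disagrees with the prediction.
expected = "versicolor"  # hypothetical user expectation
if expected != prediction:
    print(f"Conflict: you expected '{expected}', but the DT predicts '{prediction}'. "
          f"The conditions on the path above are what an explanation would need to address.")

In the paper's terms, the baseline explanation corresponds to narrating the decision path, while the conflict-based explanations additionally target disagreements like the one flagged above.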

Citation (APA)

Maruf, S., Zukerman, I., Reiter, E., & Haffari, G. (2021). Explaining Decision-Tree Predictions by Addressing Potential Conflicts between Predictions and Plausible Expectations. In INLG 2021 - 14th International Conference on Natural Language Generation, Proceedings (pp. 114–127). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.inlg-1.12
