Optimal Local Explainer Aggregation for Interpretable Prediction

Abstract

A key challenge for decision makers incorporating black-box machine-learned models into practice is understanding the predictions these models provide. One family of methods proposed to address this challenge trains surrogate explainer models that approximate how the more complex model computes its predictions. Explainer methods are generally classified as either local or global, depending on what portion of the data space they purport to explain. The improved coverage of global explainers usually comes at the expense of explainer fidelity (i.e., how well the explainer's predictions match those of the black-box model). One way of trading off the advantages of both approaches is to aggregate several local explainers into a single explainer model with improved coverage. However, aggregating these local explainers is computationally challenging, and existing methods rely only on heuristics to form the aggregations. In this paper, we propose a local explainer aggregation method that selects local explainers using non-convex optimization. In contrast to other heuristic methods, we use an integer optimization framework to combine local explainers into a near-global aggregate explainer. Our framework allows a decision maker to directly trade off the coverage and fidelity of the resulting aggregation through the parameters of the optimization problem. We also propose a novel local explainer algorithm based on information filtering. We evaluate our algorithmic framework on two healthcare datasets: the Parkinson's Progression Markers Initiative (PPMI) dataset and a geriatric mobility dataset from the UCI machine learning repository. Our choice of these healthcare-related datasets is motivated by the anticipated need for explainable precision medicine. We find that our method outperforms existing local explainer aggregation methods in terms of both fidelity and coverage of classification, and it improves on the fidelity of existing global explainer methods, particularly in multi-class settings, where state-of-the-art methods achieve 70% fidelity and ours achieves 90%.
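The abstract describes the approach only at a high level; the paper gives the exact formulation. As a rough illustration of the general idea, the sketch below (Python with PuLP) poses a set-cover-style integer program in which binary variables select local explainers so as to trade off coverage against fidelity. The coverage matrix, per-explainer fidelity scores, budget, and trade-off weight lam are all illustrative assumptions, not the authors' notation or method.

import numpy as np
import pulp

rng = np.random.default_rng(0)
n_points, n_explainers = 200, 30

# Assumed inputs (synthetic here):
# coverage[i, j] = 1 if local explainer j covers data point i
coverage = (rng.random((n_points, n_explainers)) < 0.15).astype(int)
# fidelity[j] = agreement of explainer j with the black-box model on its region
fidelity = rng.uniform(0.7, 1.0, size=n_explainers)

budget = 8   # maximum number of local explainers in the aggregate
lam = 0.5    # illustrative weight trading off fidelity against coverage

prob = pulp.LpProblem("explainer_aggregation", pulp.LpMaximize)
x = [pulp.LpVariable(f"x_{j}", cat="Binary") for j in range(n_explainers)]
c = [pulp.LpVariable(f"c_{i}", cat="Binary") for i in range(n_points)]

# A point counts as covered only if some selected explainer covers it.
for i in range(n_points):
    prob += c[i] <= pulp.lpSum(int(coverage[i, j]) * x[j] for j in range(n_explainers))

prob += pulp.lpSum(x) <= budget  # limit the size of the aggregation

# Objective: fraction of points covered, plus a weighted fidelity term
# over the selected explainers.
prob += ((1.0 / n_points) * pulp.lpSum(c)
         + (lam / budget) * pulp.lpSum(float(fidelity[j]) * x[j]
                                       for j in range(n_explainers)))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [j for j in range(n_explainers) if x[j].value() > 0.5]
print("selected local explainers:", chosen)

PuLP's bundled CBC solver handles a toy instance of this size quickly. The authors' formulation is more involved, but the coverage/fidelity trade-off that the abstract emphasizes surfaces here directly through the budget and lam parameters.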

Citation (APA)

Li, Q., Cummings, R., & Mintz, Y. (2022). Optimal Local Explainer Aggregation for Interpretable Prediction. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 12000–12007). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i11.21458
