Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs

13 citations · 25 Mendeley readers

Abstract

Interpretability methods aim to help users build trust in and understand the capabilities of machine learning models. However, existing approaches often rely on abstract, complex visualizations that poorly map to the task at hand or require non-trivial ML expertise to interpret. Here, we present two interface modules that facilitate intuitively assessing model reliability. To help users better characterize and reason about a model's uncertainty, we visualize raw and aggregate information about a given input's nearest neighbors. Using an interactive editor, users can manipulate this input in semantically-meaningful ways, determine the effect on the output, and compare against their prior expectations. We evaluate our approach using an electrocardiogram beat classification case study. Compared to a baseline feature importance interface, we find that 14 physicians are better able to align the model's uncertainty with domain-relevant factors and build intuition about its capabilities and limitations.
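For illustration, the snippet below is a minimal sketch of the general example-based explanation technique the abstract describes: retrieving a query beat's nearest neighbors from the training set and aggregating their labels as a rough proxy for model uncertainty. It is not the authors' implementation; the `embed` function, `train_beats`, `train_labels`, and `query_beat` names are hypothetical placeholders for an ECG beat embedding model and its data.

```python
# Sketch only: nearest-neighbor retrieval and label aggregation for a query
# ECG beat, assuming a hypothetical `embed` function that maps beats to a
# fixed-dimensional representation.
from collections import Counter

import numpy as np
from sklearn.neighbors import NearestNeighbors


def nearest_neighbor_summary(embed, train_beats, train_labels, query_beat, k=10):
    """Return the k nearest training beats (in embedding space) and the
    distribution of their labels for a single query beat."""
    train_emb = embed(train_beats)          # (n, d) embeddings of training beats
    query_emb = embed(query_beat[None, :])  # (1, d) embedding of the query beat

    nn = NearestNeighbors(n_neighbors=k).fit(train_emb)
    dists, idx = nn.kneighbors(query_emb)

    neighbor_labels = [train_labels[i] for i in idx[0]]
    label_counts = Counter(neighbor_labels)  # aggregate view of neighbor labels
    return idx[0], dists[0], label_counts
```

In an interface like the one described, the returned indices could drive a "raw" view (plotting the neighboring beats themselves) while the label counts could drive an "aggregate" view, letting a user judge whether the model's uncertainty tracks domain-relevant factors.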

Cite (APA)

Suresh, H., Lewis, K. M., Guttag, J., & Satyanarayan, A. (2022). Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs. In International Conference on Intelligent User Interfaces, Proceedings IUI (pp. 767–781). Association for Computing Machinery. https://doi.org/10.1145/3490099.3511160
