Interpretability methods aim to help users build trust in and understand the capabilities of machine learning models. However, existing approaches often rely on abstract, complex visualizations that map poorly to the task at hand or require non-trivial ML expertise to interpret. Here, we present two interface modules that help users intuitively assess model reliability. To help users better characterize and reason about a model's uncertainty, we visualize raw and aggregate information about a given input's nearest neighbors. Using an interactive editor, users can manipulate this input in semantically meaningful ways, determine the effect on the output, and compare against their prior expectations. We evaluate our approach using an electrocardiogram beat classification case study. We find that, compared to a baseline feature importance interface, 14 physicians are better able to align the model's uncertainty with domain-relevant factors and build intuition about its capabilities and limitations.
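To make the nearest-neighbor idea concrete, the following is a minimal sketch (not the authors' implementation) of how one might retrieve a query beat's nearest training examples in a learned feature space and summarize their label agreement as a rough familiarity signal. The embedding function `embed`, the training arrays `X_train`/`y_train`, and the neighbor count `k` are all assumptions introduced here for illustration.

```python
# Illustrative sketch only; assumes a hypothetical embed(beat) -> feature vector
# and labeled training data (X_train, y_train).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighbor_summary(embed, X_train, y_train, query_beat, k=10):
    """Return the k nearest training beats to the query, their distances,
    per-label counts, and the fraction agreeing on the majority label."""
    train_emb = np.stack([embed(x) for x in X_train])
    nn = NearestNeighbors(n_neighbors=k).fit(train_emb)
    dists, idx = nn.kneighbors(embed(query_beat).reshape(1, -1))
    labels = np.asarray(y_train)[idx[0]]
    values, counts = np.unique(labels, return_counts=True)
    agreement = counts.max() / k  # 1.0 means all neighbors share one label
    return idx[0], dists[0], dict(zip(values.tolist(), counts.tolist())), agreement
```

An interface could show the retrieved neighbors alongside the per-label counts, letting a clinician judge whether the model's uncertainty tracks factors they consider clinically relevant.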
Citation
Suresh, H., Lewis, K. M., Guttag, J., & Satyanarayan, A. (2022). Intuitively Assessing ML Model Reliability through Example-Based Explanations and Editing Model Inputs. In International Conference on Intelligent User Interfaces, Proceedings IUI (pp. 767–781). Association for Computing Machinery. https://doi.org/10.1145/3490099.3511160