LIMEADE: From AI Explanations to Advice Taking

Citations: 5 · Mendeley readers: 36

Abstract

Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow AI to take advice from humans in response to explanations are similarly useful. While both capabilities are well developed for transparent learning models (e.g., linear models and GA2Ms) and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, little attention has been given to advice methods for opaque models. This article introduces LIMEADE, the first general framework that translates both positive and negative advice (expressed using high-level vocabulary such as that employed by post hoc explanations) into an update to an arbitrary, underlying opaque model. We demonstrate the generality of our approach with case studies on 70 real-world models across two broad domains: image classification and text recommendation. We show that our method improves accuracy compared to a rigorous baseline on the image classification domains. For the text modality, we apply our framework to a neural recommender system for scientific papers on a public website; our user study shows that our framework leads to significantly higher perceived user control, trust, and satisfaction.
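The abstract describes translating high-level advice (e.g., "this term is a positive signal") into an update to an arbitrary opaque model. A minimal sketch of one way such advice-taking can work, loosely in the spirit of LIMEADE but not the authors' exact algorithm: user advice on an interpretable feature is converted into pseudo-labeled examples that are added to the training data before refitting the opaque model. The corpus, `apply_advice` helper, and pseudo-example count are illustrative assumptions.

```python
# Hedged sketch: advice -> pseudo-labeled examples -> retrain.
# This is an illustration of the general idea, NOT the paper's algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy recommendation corpus (assumption for illustration).
docs = ["neural networks for vision", "transformers for text",
        "gardening tips for spring", "cooking pasta at home"]
labels = [1, 1, 0, 0]  # 1 = relevant to the user, 0 = not

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)

def apply_advice(term, positive, n_pseudo=5):
    """Translate advice expressed in the explanation vocabulary
    ('term is a positive/negative signal') into pseudo-labeled
    examples, then refit the underlying (opaque) model."""
    pseudo_docs = [term] * n_pseudo
    pseudo_labels = [1 if positive else 0] * n_pseudo
    X_aug = vec.transform(docs + pseudo_docs)
    clf.fit(X_aug, labels + pseudo_labels)

# Positive advice on "gardening" should raise its predicted relevance.
query = vec.transform(["gardening tips"])
before = clf.predict_proba(query)[0, 1]
apply_advice("gardening", positive=True)
after = clf.predict_proba(query)[0, 1]
```

The key design point, as in the abstract, is that the advice vocabulary is the high-level one used by post hoc explanations (here, terms), while the model update itself remains generic: any learner with a `fit` method could stand in for the logistic regression.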

Citation (APA)

Lee, B. C. G., Downey, D., Lo, K., & Weld, D. S. (2023). LIMEADE: From AI Explanations to Advice Taking. ACM Transactions on Interactive Intelligent Systems, 13(4). https://doi.org/10.1145/3589345
