Creating a User Model to Support User-specific Explanations of AI Systems


Abstract

In this paper, we present a framework for providing user-specific explanations of AI systems. This is achieved by proposing an approach to modeling a user that enables a decision procedure to reason about how much detail to provide in an explanation. As one novel aspect of our design, we also clarify the circumstances under which it is best not to provide an explanation at all. While transparency of black-box AI systems is an important aim for ethical AI, efforts to date are often one-size-fits-all. Our position is that more attention should be paid to offering context-specific explanations, and our model takes an important step towards achieving that aim.
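To illustrate the kind of reasoning the abstract describes, the following is a minimal sketch of a user model feeding a decision procedure that selects how much explanation detail to present, including the case where no explanation is given at all. The attributes (expertise, time_pressure, trust), the thresholds, and the detail levels are illustrative assumptions for this sketch, not the specific model proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class UserModel:
    """Hypothetical user attributes; the paper's actual model may differ."""
    expertise: float      # 0.0 (novice) to 1.0 (expert)
    time_pressure: float  # 0.0 (relaxed) to 1.0 (urgent)
    trust: float          # 0.0 (distrustful) to 1.0 (fully trusting)

def explanation_level(user: UserModel) -> str:
    """Illustrative decision procedure: choose a level of explanation detail,
    including the option of presenting no explanation at all."""
    if user.time_pressure > 0.8 or user.trust > 0.9:
        # An urgent context or an already-trusting user: an explanation
        # may add no value here (the "no explanation" case).
        return "none"
    if user.expertise > 0.6:
        # Experts can absorb and benefit from a fuller account.
        return "detailed"
    return "brief"

if __name__ == "__main__":
    user = UserModel(expertise=0.7, time_pressure=0.2, trust=0.5)
    print(explanation_level(user))  # -> "detailed"
```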


Citation (APA)

Chambers, O., Cohen, R., Grossman, M. R., & Chen, Q. (2022). Creating a User Model to Support User-specific Explanations of AI Systems. In UMAP2022 - Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization (pp. 163–166). Association for Computing Machinery, Inc. https://doi.org/10.1145/3511047.3537678
