In this paper, we present a framework for providing user-specific explanations of AI systems. We achieve this by proposing an approach to user modeling that enables a decision procedure to reason about how much detail an explanation should include. As one novel aspect of our design, we also clarify the circumstances under which it is best not to provide an explanation at all. While transparency of black-box AI systems is an important aim for ethical AI, efforts to date have often been one-size-fits-all. Our position is that more attention should be paid to offering explanations that are context-specific, and our model takes an important step toward achieving that aim.
Chambers, O., Cohen, R., Grossman, M. R., & Chen, Q. (2022). Creating a User Model to Support User-specific Explanations of AI Systems. In UMAP2022 - Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization (pp. 163–166). Association for Computing Machinery, Inc. https://doi.org/10.1145/3511047.3537678