Assessing Collaborative Explanations of AI using Explanation Goodness Criteria


Abstract

Explainable AI represents an increasingly important category of systems that attempt to support human understanding of, and trust in, machine intelligence and automation. Typical systems rely on algorithms to help users understand the information underlying decisions and to establish justified trust and reliance. Researchers have proposed using goodness criteria to measure the quality of explanations as a formative evaluation of an XAI system, but these criteria have not been systematically investigated in the literature. To explore this, we present a novel collaborative explanation system (CXAI) and propose several goodness criteria to evaluate the quality of its explanations. Results suggest that the explanations provided by this system are typically correct, informative, written in understandable ways, and focused on larger-scale data patterns than those typically generated by algorithmic XAI systems. Implications for how these criteria may be applied to other XAI systems are discussed.

Citation (APA)

Mamun, T. I., Baker, K., Malinowski, H., Hoffman, R. R., & Mueller, S. T. (2021). Assessing Collaborative Explanations of AI using Explanation Goodness Criteria. In Proceedings of the Human Factors and Ergonomics Society (Vol. 65, pp. 988–993). SAGE Publications Inc. https://doi.org/10.1177/1071181321651307
