Abstract
There has been a sharp rise in research activity on explainable artificial intelligence (XAI), especially in the context of machine learning (ML). However, less progress has been made in developing and implementing XAI techniques in AI-enabled environments that involve non-expert stakeholders. This paper reports our investigation into providing explanations of the outcomes of ML algorithms to non-experts. We investigate three explanation approaches (global, local, and counterfactual), using decision trees as the ML model for our use case. We demonstrate the approaches on a sample dataset and provide empirical results from a study involving over 200 participants. Our results show that most participants demonstrated a good understanding of the generated explanations.
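To make the three explanation styles named above concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) that illustrates global, local, and counterfactual explanations for a scikit-learn decision tree on the Iris dataset; the counterfactual search here is a deliberately naive single-feature perturbation introduced only for illustration.

```python
# Hedged sketch: global, local, and counterfactual explanations
# for a decision tree, assuming scikit-learn and the Iris dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y, feature_names = data.data, data.target, list(data.feature_names)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Global explanation: the overall tree structure and feature importances.
print(export_text(clf, feature_names=feature_names))
print(dict(zip(feature_names, clf.feature_importances_.round(3))))

# Local explanation: the root-to-leaf decision path for one instance.
x = X[0].reshape(1, -1)
tree = clf.tree_
for node in clf.decision_path(x).indices:
    if tree.children_left[node] != tree.children_right[node]:  # internal node
        f, t = tree.feature[node], tree.threshold[node]
        op = "<=" if x[0, f] <= t else ">"
        print(f"{feature_names[f]} = {x[0, f]:.2f} {op} {t:.2f}")

# Counterfactual explanation (naive, illustrative only): perturb a single
# feature in small steps until the predicted class changes.
def naive_counterfactual(model, x, feature, step=0.1, max_steps=100):
    original_class = model.predict(x)[0]
    for direction in (+1, -1):
        x_cf = x.copy()
        for _ in range(max_steps):
            x_cf[0, feature] += direction * step
            if model.predict(x_cf)[0] != original_class:
                return x_cf
    return None

cf = naive_counterfactual(clf, x.astype(float), feature=2)
if cf is not None:
    print(f"Prediction flips if {feature_names[2]} changes "
          f"from {x[0, 2]:.2f} to {cf[0, 2]:.2f}")
```

The function `naive_counterfactual` is a placeholder for the purposes of this sketch; practical counterfactual methods typically optimise for minimal, plausible changes across several features rather than stepping a single feature.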
Citation
Zhang, Y., McAreavey, K., & Liu, W. (2022). Developing and Experimenting on Approaches to Explainability in AI Systems. In International Conference on Agents and Artificial Intelligence (Vol. 2, pp. 518–527). Science and Technology Publications, Lda. https://doi.org/10.5220/0010900300003116