Towards Trustworthy and Understandable AI: Unraveling Explainability Strategies on Simplifying Algorithms, Appropriate Information Disclosure, and High-level Collaboration


Abstract

Human-centered artificial intelligence (AI) has garnered significant attention. Explainability strategies grounded in explainable AI (XAI) comprise techniques and principles intended to make AI systems understandable and trustworthy for users. However, existing explainability strategies still face numerous challenges in helping users better understand AI system decisions. This literature review explores how these challenges can be addressed through simplified algorithms, appropriate information disclosure, and high-level collaboration, thereby offering future research directions for building AI systems that are trustworthy and understandable to users.

Citation (APA)
Yu, S. (2023). Towards Trustworthy and Understandable AI: Unraveling Explainability Strategies on Simplifying Algorithms, Appropriate Information Disclosure, and High-level Collaboration. In ACM International Conference Proceeding Series (pp. 133–143). Association for Computing Machinery. https://doi.org/10.1145/3616961.3616965
