One of the major criticisms of Artificial Intelligence (AI) is its lack of explainability. Many critics claim that without knowing how an AI derives a result or reaches a given conclusion, it is impossible to trust its outcomes. This problem is especially concerning when AI-based systems and applications fail to perform their tasks successfully. In this Special Issue Editorial, we focus on two main areas, explainable AI (XAI) and accuracy, and how both dimensions are critical to building trustworthy systems. We review prominent XAI design themes, leading to a reframing of the design and development effort that highlights the significance of the human and thereby demonstrates the importance of human-centered AI (HCAI). The HCAI approach advocates a range of deliberate design decisions, such as those pertaining to multi-stakeholder engagement and the dissolving of disciplinary boundaries. This enables the consideration and integration of deep interdisciplinary knowledge, as evidenced in our example of social cognitive approaches to AI design. This Editorial then discusses ways forward, underscoring the value of a balanced approach to assessing the opportunities, risks, and responsibilities associated with AI design. We conclude by presenting the papers in the Special Issue and their contributions, pointing to future research endeavors.