The possibilities and limits of XAI in education: a socio-technical perspective


Abstract

Explicable AI in education (XAIED) has been proposed as a way to improve trust and ethical practice in algorithmic education. Based on a critical review of the literature, this paper argues that XAI should be understood as part of a wider socio-technical turn in AI. The socio-technical perspective indicates that explicability is a relative term. Consequently, XAIED mediation strategies should be developed and implemented across education stakeholder communities using language that is not just ‘explicable’ from an expert or technical standpoint, but explainable and interpretable to a range of stakeholders, including learners. The discussion considers the impact of XAIED on several educational stakeholder types in light of the transparency of algorithms and the approach taken to explanation. Problematising the propositions of XAIED shows that XAI is not a full solution to the issues raised by AI, but a beginning and a necessary precondition for meaningful discourse about possible futures.

Citation (APA)

Farrow, R. (2023). The possibilities and limits of XAI in education: a socio-technical perspective. Learning, Media and Technology, 48(2), 266–279. https://doi.org/10.1080/17439884.2023.2185630
