Abstract
Explainable Artificial Intelligence (XAI) seeks to render Artificial Intelligence (AI) models transparent and comprehensible, potentially increasing trust and confidence in AI recommendations. This research explores XAI within unsupervised educational machine learning, a relatively under-explored topic in Learning Analytics (LA). It introduces an XAI framework designed to elucidate clustering-based personalized recommendations for educators. Our approach involves a two-step validation: computational verification followed by a domain-specific evaluation of its impact on teachers' AI acceptance. Through interviews with K-12 educators, we identified key themes in teachers' attitudes toward the explanations. The main contribution of this paper is a new XAI scheme for unsupervised educational machine-learning decision-support systems. A second contribution is shedding light on the subjective nature of educators' interpretation of XAI schemes and visualizations.
Citation
Feldman-Maggor, Y., Nazaretsky, T., & Alexandron, G. (2024). Explainable AI for Unsupervised Machine Learning: A Proposed Scheme Applied to a Case Study with Science Teachers. In International Conference on Computer Supported Education, CSEDU - Proceedings (Vol. 1, pp. 436–444). Science and Technology Publications, Lda. https://doi.org/10.5220/0012687000003693