Serendipitous Gains of Explaining a Classifier - Artificial versus Human Performance and Annotator Support in an Urgent Instructor-Intervention Model for MOOCs

Abstract

Determining when instructor intervention is needed, based on learners' comments and their urgency in massive open online course (MOOC) environments, is a known challenge. To address this challenge, prior art used autonomous machine learning (ML) models. These models are described as having a "black-box" nature, and their output is incomprehensible to humans. This paper shows how to apply eXplainable Artificial Intelligence (XAI) techniques to interpret a MOOC intervention model for the detection of urgent comments. As comments were selected from the MOOC course and annotated by human experts, we additionally compare the confidence between annotators (annotator agreement confidence) against an estimate of the ML model's class score, to support the intervention decision. Serendipitously, we show, for the first time, that XAI can further be used to support annotators in creating high-quality, gold standard datasets for urgent intervention.
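The comparison the abstract describes, annotator agreement versus an ML class score, can be sketched in a few lines. This is an illustrative example only, not the authors' code: the comments, labels, and the choice of TF-IDF with logistic regression and Cohen's kappa are all assumptions made for demonstration.

```python
# Illustrative sketch (not the paper's implementation): comparing
# inter-annotator agreement against a classifier's class score on toy data.
# All comments and labels below are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

# Hypothetical MOOC comments with urgency labels (1 = urgent, 0 = not urgent)
comments = [
    "I cannot submit my assignment, the deadline is tonight!",
    "Great lecture, thanks for sharing.",
    "The quiz page keeps crashing, please help urgently.",
    "Looking forward to next week's topic.",
    "My certificate never arrived and support is not replying.",
    "Interesting discussion in the forum this week.",
]
annotator_a = [1, 0, 1, 0, 1, 0]
annotator_b = [1, 0, 1, 0, 0, 0]  # the annotators disagree on one comment

# Annotator-agreement confidence, here measured by Cohen's kappa
kappa = cohen_kappa_score(annotator_a, annotator_b)

# A simple interpretable classifier: TF-IDF features + logistic regression.
# predict_proba supplies the ML-side class score to set against agreement.
X = TfidfVectorizer().fit_transform(comments)
clf = LogisticRegression().fit(X, annotator_a)
class_scores = clf.predict_proba(X)[:, 1]  # P(urgent) per comment

print(f"annotator agreement (kappa): {kappa:.2f}")
for text, score in zip(comments, class_scores):
    print(f"P(urgent)={score:.2f}  {text[:45]}")
```

Comments where the model's class score and the annotators' agreement diverge are exactly the cases where, as the abstract notes, an explanation of the classifier can help annotators refine the gold standard.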

Citation (APA)

Alrajhi, L., Pereira, F. D., Cristea, A. I., & Alamri, A. (2023). Serendipitous Gains of Explaining a Classifier - Artificial versus Human Performance and Annotator Support in an Urgent Instructor-Intervention Model for MOOCs. In Proceedings of the 6th Workshop on Human Factors in Hypertext, HUMAN 2023 - Associated to the ACM Conference on Hypertext and Social Media 2023, HT 2023. Association for Computing Machinery, Inc. https://doi.org/10.1145/3603607.3613480
