Identifying Usability Challenges in AI-Based Essay Grading Tools

Abstract

Automated Essay Scoring (AES) efforts have recently made it possible for platforms to provide real-time feedback and grades for student essays. With the growing importance of addressing usability issues that arise from integrating artificial intelligence (AI) into educational platforms, there have been significant efforts to improve the visual elements of the User Interfaces (UI) of such platforms. However, little research has examined how AI explainability and algorithm transparency affect the usability of AES platforms. To address this gap, a qualitative study was conducted using an AI-driven essay writing and grading platform. The study involved student and instructor participants and used surveys, semi-structured interviews, and a focus group to collect data on users' experiences and perspectives. Results show that user understanding of the system, feedback quality, error handling, and establishing trust are the main usability concerns related to explainability and transparency. Understanding these challenges can help guide the development of effective grading tools that prioritize explainability and transparency, ultimately improving their usability.

Citation (APA)

Hall, E., Seyam, M., & Dunlap, D. (2023). Identifying Usability Challenges in AI-Based Essay Grading Tools. In Communications in Computer and Information Science (Vol. 1831 CCIS, pp. 675–680). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-36336-8_104
