Detecting Cheating in Online Take-Home Exams with Randomized Questions

Abstract

The last three years posed a significant challenge for educational institutions due to the loss of face-to-face instruction and exam proctoring. Many instructors turned to asynchronous, online exams as a replacement for standard pen-and-paper exams. It is no surprise that many tools aimed at delivering computer-based assessments have become popular and are centers of research and development. This poster discusses our attempt to build a post-exam cheating detection system for the PrairieLearn open-source platform, which supports randomized question generators, to uncover irregularities in submissions. Our system compares all pairs of students using four rules: Times (did students take the exam synchronously), Answers (did they submit similar wrong answers), Orders (did they answer the questions in the same order), and Scores (did they achieve the same scores). It adds one final individual rule, the Score-Time-Ratio, which measures how many "points per minute" a student has earned, to flag students who open the exam, copy in a perfect answer, and submit. We deliver a detailed report to the instructor, allowing them to sort their students by these measures and providing a data-driven way to investigate.
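The four pairwise rules and the individual Score-Time-Ratio rule described above lend themselves to a compact sketch. The Python below is an illustrative reconstruction under stated assumptions, not PrairieLearn's actual implementation; the `Attempt` fields and all function names are hypothetical stand-ins for whatever submission data the platform exports.

```python
from dataclasses import dataclass, field
from itertools import combinations

@dataclass
class Attempt:
    """One student's exam attempt (all fields are hypothetical)."""
    student: str
    start: float                  # minutes when the exam was opened
    end: float                    # minutes when the exam was submitted
    wrong_answers: set = field(default_factory=set)   # normalized wrong responses
    question_order: tuple = ()    # order in which questions were answered
    score: float = 0.0

def overlap_minutes(a: Attempt, b: Attempt) -> float:
    """Times rule: how many minutes two attempts overlapped."""
    return max(0.0, min(a.end, b.end) - max(a.start, b.start))

def shared_wrong(a: Attempt, b: Attempt) -> int:
    """Answers rule: count of identical wrong answers."""
    return len(a.wrong_answers & b.wrong_answers)

def same_order(a: Attempt, b: Attempt) -> bool:
    """Orders rule: did they answer the questions in the same order?"""
    return a.question_order == b.question_order

def same_score(a: Attempt, b: Attempt) -> bool:
    """Scores rule: did they achieve the same score?"""
    return a.score == b.score

def score_time_ratio(a: Attempt) -> float:
    """Score-Time-Ratio rule: points per minute; very high values flag
    students who open the exam, paste a perfect answer, and submit."""
    duration = a.end - a.start
    return a.score / duration if duration > 0 else float("inf")

def pairwise_report(attempts: list) -> list:
    """One row per student pair, sortable by each measure for the instructor."""
    return [
        {
            "pair": (a.student, b.student),
            "overlap_min": overlap_minutes(a, b),
            "shared_wrong": shared_wrong(a, b),
            "same_order": same_order(a, b),
            "same_score": same_score(a, b),
        }
        for a, b in combinations(attempts, 2)
    ]
```

An instructor-facing report could then sort `pairwise_report(...)` by, say, `shared_wrong` descending, and separately sort students by `score_time_ratio` to surface implausibly fast perfect scores; the rules only rank pairs for human investigation and do not by themselves prove collusion.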

Citation (APA)
Xiao, R., Huerta-Mercado, E., & Garcia, D. (2023). Detecting Cheating in Online Take-Home Exams with Randomized Questions. In SIGCSE 2023 - Proceedings of the 54th ACM Technical Symposium on Computer Science Education (Vol. 2, p. 1323). Association for Computing Machinery, Inc. https://doi.org/10.1145/3545947.3576270
