Unsupervised learning of question difficulty levels using assessment responses

Abstract

Question difficulty level is an important factor in determining assessment outcomes. Accurately mapping the difficulty levels of questions in a question bank offers a wide range of benefits beyond higher assessment quality: improved personalized learning, adaptive testing, automated question generation, and cheating detection. We propose an efficient unsupervised machine learning method that derives question difficulty levels from assessment responses, improving the consistency and accuracy of difficulty assignment. We show that effective feature extraction is achieved by partitioning test takers based on their test scores. We validate our model using a large dataset collected from a university-level proctored assessment taken by two thousand students. Preliminary results show that our model is effective, achieving a mean accuracy of 84% against instructor validation. We also show the model's effectiveness in flagging miscalibrated questions. Our approach can easily be adapted to a wide range of applications in e-learning and e-assessment.
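The abstract only outlines the approach, so the sketch below is one plausible reading rather than the authors' exact pipeline: test takers are split into score bands, each question is described by its per-band correct-answer rates, and the questions are then clustered into difficulty levels without labels. The function name `difficulty_levels`, the `n_bands`/`n_levels` parameters, the equal-sized band split, and the use of k-means are all illustrative assumptions.

```python
# Illustrative sketch only; the paper's exact feature extraction and
# clustering choices are assumptions, not taken from the abstract.
import numpy as np
from sklearn.cluster import KMeans


def difficulty_levels(responses, n_bands=3, n_levels=3, random_state=0):
    """Cluster questions into difficulty levels from a 0/1 response matrix.

    responses: array of shape (num_students, num_questions), 1 = correct.
    Returns one integer level per question, with 0 = easiest.
    """
    responses = np.asarray(responses, dtype=float)
    total_scores = responses.sum(axis=1)

    # Partition test takers into roughly equal-sized score bands
    # (e.g., low / medium / high scorers).
    order = np.argsort(total_scores)
    bands = np.empty(len(total_scores), dtype=int)
    for b, idx in enumerate(np.array_split(order, n_bands)):
        bands[idx] = b

    # Feature vector per question: fraction answered correctly
    # within each score band.
    features = np.vstack(
        [responses[bands == b].mean(axis=0) for b in range(n_bands)]
    ).T  # shape: (num_questions, n_bands)

    # Unsupervised grouping of questions into difficulty levels.
    labels = KMeans(n_clusters=n_levels, n_init=10,
                    random_state=random_state).fit_predict(features)

    # Relabel clusters so level 0 has the highest overall correct rate
    # (i.e., the easiest questions).
    cluster_rates = np.array(
        [features[labels == c].mean() for c in range(n_levels)]
    )
    rank = {c: i for i, c in enumerate(np.argsort(-cluster_rates))}
    return np.array([rank[c] for c in labels])
```

Under this reading, a question whose clustered level disagrees with its instructor-assigned level would be the kind of candidate the paper describes as miscalibrated; for example, `difficulty_levels(response_matrix)` could be compared against the question bank's existing labels.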

Citation (APA)

Narayanan, S., Kommuri, V. S., Subramanian, N. S., Bijlani, K., & Nair, N. C. (2017). Unsupervised learning of question difficulty levels using assessment responses. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10404, pp. 543–552). Springer Verlag. https://doi.org/10.1007/978-3-319-62392-4_39
