Multiple-choice questions are a widely used form of assessment, yet writing items that properly evaluate student learning is a complex task. Guidelines exist for manual item creation, but automatic evaluation of item quality would be a helpful tool for teachers. In this paper, we present a method for evaluating distractor (i.e., incorrect option) quality that combines syntactic and semantic homogeneity criteria, based on Natural Language Processing methods. We evaluate this method on a large MCQ corpus and show that combining several measures enables us to validate distractors.
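The abstract does not detail the homogeneity measures themselves, but as a rough illustration of the underlying idea, the sketch below (hypothetical, not the authors' implementation) scores a distractor against the correct answer using POS-tag overlap as a syntactic criterion and word-vector cosine similarity as a semantic criterion, with spaCy assumed as the NLP toolkit and an arbitrary equal weighting of the two measures.

```python
# Hypothetical sketch of distractor homogeneity scoring (not the authors' code).
# Assumes spaCy with a medium English model providing word vectors.
import spacy

nlp = spacy.load("en_core_web_md")

def syntactic_homogeneity(answer: str, distractor: str) -> float:
    """Jaccard overlap of coarse POS-tag sets as a rough syntactic criterion."""
    a_tags = {tok.pos_ for tok in nlp(answer)}
    d_tags = {tok.pos_ for tok in nlp(distractor)}
    union = a_tags | d_tags
    return len(a_tags & d_tags) / len(union) if union else 0.0

def semantic_homogeneity(answer: str, distractor: str) -> float:
    """Cosine similarity of averaged word vectors as a rough semantic criterion."""
    return nlp(answer).similarity(nlp(distractor))

def distractor_score(answer: str, distractor: str,
                     w_syn: float = 0.5, w_sem: float = 0.5) -> float:
    """Combine the two criteria into a single quality score (weights are illustrative)."""
    return (w_syn * syntactic_homogeneity(answer, distractor)
            + w_sem * semantic_homogeneity(answer, distractor))

if __name__ == "__main__":
    answer = "photosynthesis"
    # A semantically related distractor should score higher than an unrelated one.
    for d in ["respiration", "the Eiffel Tower"]:
        print(d, round(distractor_score(answer, d), 3))
```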
Pho, V. M., Ligozat, A. L., & Grau, B. (2015). Distractor quality evaluation in multiple choice questions. In Lecture Notes in Computer Science (Vol. 9112, pp. 377–386). Springer. https://doi.org/10.1007/978-3-319-19773-9_38