Distractor quality evaluation in multiple choice questions

Abstract

Multiple choice questions are a widely used form of assessment, yet writing items that properly evaluate student learning is a complex task. Guidelines exist for manual item creation, but automatic item quality evaluation would be a helpful tool for teachers. In this paper, we present a method for evaluating distractor (i.e., incorrect option) quality that combines syntactic and semantic homogeneity criteria, based on Natural Language Processing methods. We evaluate this method on a large MCQ corpus and show that combining several measures enables us to validate distractors.
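To make the homogeneity idea concrete, here is a minimal illustrative sketch, not the paper's actual method: it scores a distractor set with two crude proxies, a surface syntactic signature (token count and capitalization pattern) standing in for syntactic homogeneity, and word-set overlap standing in for semantic similarity. All function names and the scoring formula are assumptions made for illustration.

```python
# Hypothetical illustration (NOT the paper's method): score how homogeneous
# a set of distractors is with the correct answer (the "key").

def syntactic_signature(option: str) -> tuple:
    """Crude syntactic profile: token count plus capitalization pattern."""
    tokens = option.split()
    return (len(tokens), tuple(t[0].isupper() for t in tokens))

def semantic_overlap(a: str, b: str) -> float:
    """Jaccard overlap of lowercased word sets, a crude stand-in for
    distributional semantic similarity."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def homogeneity_score(key: str, distractors: list) -> float:
    """Average of (a) the fraction of distractors sharing the key's
    syntactic signature and (b) their mean semantic overlap with the key."""
    syn = sum(syntactic_signature(d) == syntactic_signature(key)
              for d in distractors) / len(distractors)
    sem = sum(semantic_overlap(key, d) for d in distractors) / len(distractors)
    return (syn + sem) / 2

key = "the mitochondrion"
distractors = ["the ribosome", "the nucleus", "a long strand of DNA"]
print(round(homogeneity_score(key, distractors), 3))  # → 0.444
```

The odd-one-out distractor ("a long strand of DNA") lowers both components, which matches the intuition that a good distractor set should be syntactically and semantically homogeneous with the key.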

Citation (APA)

Pho, V. M., Ligozat, A. L., & Grau, B. (2015). Distractor quality evaluation in multiple choice questions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9112, pp. 377–386). Springer Verlag. https://doi.org/10.1007/978-3-319-19773-9_38
