Evaluating multiple choice question generator

Abstract

Semantic-based, computer-assisted automated question generators have become increasingly popular as tools for creating personalized assessment questions. Various question generator tools have been proposed, such as those that generate structured questions from a text file, or that generate Multiple Choice Questions (MCQs) from a text file or from an ontology-based knowledge representation. A comparison framework and evaluation methodology are required to evaluate different question generator tools. This paper discusses the requirements and criteria for carrying out a performance comparison of different MCQ generators. A feature comparison of our Question Generation (QG) tool, Mimos-QG, with existing QG tools is presented. We have evaluated our QG tool on three different domain ontologies against several standard criteria: the correctness of (a) distractor generation, (b) the answer choice grouping strategy, and (c) syntactic and pedagogical quality. The experimental results indicate that Mimos-QG is capable of producing good-quality direct-type and grouping-type multiple choice questions. © 2012 Springer-Verlag.

Citation (APA)

Tan, S. Y., Kiu, C. C., & Lukose, D. (2012). Evaluating multiple choice question generator. In Communications in Computer and Information Science (Vol. 295 CCIS, pp. 283–292). https://doi.org/10.1007/978-3-642-32826-8_29
