An experimental evaluation of automatically generated multiple choice questions from ontologies

Abstract

To support the construction of MCQs, there have been recent efforts to generate MCQs with controlled difficulty from OWL ontologies. Preliminary evaluations suggest that automatically generated questions are not yet field ready and highlight the need for further evaluation. In this study, we present an extensive evaluation of automatically generated MCQs. We found that even questions that adhere to guidelines are subject to clustering of distractors; hence, such clustering must be recognised, as it could affect the prediction of question difficulty.

Citation (APA)

Kurdi, G., Parsia, B., & Sattler, U. (2017). An experimental evaluation of automatically generated multiple choice questions from ontologies. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10161 LNCS, pp. 24–39). Springer Verlag. https://doi.org/10.1007/978-3-319-54627-8_3
