Evaluating human and automated generation of distractors for diagnostic multiple-choice cloze questions to assess children’s reading comprehension

Abstract

We report an experiment to evaluate DQGen’s performance in generating three types of distractors for diagnostic multiple-choice cloze (fill-in-the-blank) questions that assess children’s reading comprehension processes. Ungrammatical distractors test syntax, nonsensical distractors test semantics, and locally plausible distractors test inter-sentential processing. Twenty-seven knowledgeable humans rated candidate answers as correct, plausible, nonsensical, or ungrammatical without knowing each candidate’s intended type or whether it was generated by DQGen, written by other humans, or the correct answer. Surprisingly, DQGen did significantly better than humans at generating ungrammatical distractors and slightly better at generating nonsensical distractors, albeit worse at generating plausible distractors. Vetting DQGen’s output and writing distractors by hand only when necessary would take half as long as writing them all by hand, and would improve their quality.
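To make the three distractor types concrete, here is a minimal, purely illustrative sketch of a diagnostic cloze item. The example sentence, words, and the ClozeItem/diagnose names are invented for illustration; they are not taken from the paper or from DQGen's output.

```python
from dataclasses import dataclass

# Hypothetical illustration: a diagnostic cloze item pairs the correct word
# with one distractor of each type, so a child's wrong choice indicates
# which comprehension process broke down.

@dataclass
class ClozeItem:
    stem: str            # sentence containing the blank
    correct: str         # the original deleted word
    plausible: str       # fits locally but not the wider passage -> inter-sentential processing
    nonsensical: str     # grammatical here but meaningless -> semantics
    ungrammatical: str   # wrong part of speech or form -> syntax

item = ClozeItem(
    stem="The hungry fox crept toward the ____.",
    correct="henhouse",
    plausible="river",        # sensible in isolation, contradicted by the rest of the passage
    nonsensical="honesty",    # a noun, but makes no sense in this sentence
    ungrammatical="quickly",  # an adverb cannot fill a noun slot
)

def diagnose(choice: str, item: ClozeItem) -> str:
    """Map the selected answer to the comprehension process it implicates."""
    if choice == item.correct:
        return "comprehension ok"
    if choice == item.ungrammatical:
        return "syntactic processing failure"
    if choice == item.nonsensical:
        return "semantic processing failure"
    return "inter-sentential (discourse-level) processing failure"

print(diagnose("quickly", item))  # -> syntactic processing failure
```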

Citation (APA)

Huang, Y. T., & Mostow, J. (2015). Evaluating human and automated generation of distractors for diagnostic multiple-choice cloze questions to assess children’s reading comprehension. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9112, pp. 155–164). Springer Verlag. https://doi.org/10.1007/978-3-319-19773-9_16
