G-Asks: An Intelligent Automatic Question Generation System for Academic Writing Support

  • Liu M
  • Calvo R
  • Rus V
Citations: N/A
Readers (Mendeley): 80

Abstract

Many electronic feedback systems have been proposed for writing support. However, most of these systems aim only at supporting writing to communicate rather than writing to learn, as in the case of literature review writing. Trigger questions are a potential form of support for writing to learn, but current automatic question generation approaches focus on factual question generation for reading comprehension or vocabulary assessment. This article presents a novel Automatic Question Generation (AQG) system, called G-Asks, which generates specific trigger questions as a form of support for students' learning through writing. We conducted a large-scale case study, including 24 human supervisors and 33 research students, in an Engineering Research Method course at The University of Sydney, and compared questions generated by G-Asks with human-generated questions. The results indicate that G-Asks can generate questions as useful as human supervisors ('useful' is one of five question quality measures) while significantly outperforming Human Peer and Generic Questions in most quality measures, after filtering out questions with grammatical and semantic errors. Furthermore, we identified the most frequent question types derived from the human supervisors' questions and discussed how the human supervisors generated such questions from the source text.

Citation (APA)

Liu, M., Calvo, R. A., & Rus, V. (2012). G-Asks: An Intelligent Automatic Question Generation System for Academic Writing Support. Dialogue & Discourse, 3(2), 101–124. https://doi.org/10.5087/dad.2012.205
