Comparing Neural Question Generation Architectures for Reading Comprehension

Abstract

In recent decades, there has been a significant push to leverage technology to aid both teachers and students in the classroom. Advances in language processing have been harnessed to provide better tutoring services, automated feedback to teachers, improved peer-to-peer feedback mechanisms, and measures of student reading comprehension. Automated question generation systems have the potential to significantly reduce teachers' workload on this last task. In this paper, we compare three neural architectures for question generation across two types of reading material: narratives and textbooks. For each architecture, we explore the benefits of including question attributes in the input representation. Our results show that a T5 architecture performs best overall, with a RougeL score of 0.536 on a narrative corpus and 0.316 on a textbook corpus. Breaking the results down by attribute, we find that attribute information can improve the quality of some question types, including Action and Character, but not for all models.
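To make the setup concrete, the sketch below shows one common way to condition a T5-style question generator on a question attribute (by prepending it to the input passage) and to score a generated question against a reference with RougeL, using the Hugging Face transformers and Google rouge_score libraries. This is an illustrative reconstruction, not the authors' code: the prompt format, the t5-base checkpoint, and the example texts are assumptions, and in practice the model would first be fine-tuned on question-generation data.

```python
# Minimal sketch of attribute-conditioned question generation with T5.
# NOTE: illustrative only; the prompt format and checkpoint are assumptions,
# not the paper's actual fine-tuned models or input representation.

from rouge_score import rouge_scorer
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def generate_question(passage: str, attribute: str) -> str:
    """Prepend a question-attribute tag (e.g. 'character', 'action')
    so the model can condition on the desired question type."""
    prompt = f"generate {attribute} question: {passage}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(inputs.input_ids, max_length=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

passage = (
    "Maria packed her telescope and hiked to the ridge before sunset, "
    "hoping to catch the meteor shower."
)
question = generate_question(passage, "character")

# Score against a human-written reference question with RougeL, mirroring
# the paper's evaluation metric (the reference here is invented).
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
scores = scorer.score("Who hiked to the ridge?", question)
print(question)
print(scores["rougeL"].fmeasure)
```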

Citation (APA)

Perkoff, E. M., Bhattacharyya, A., Cai, J. Z., & Cao, J. (2023). Comparing Neural Question Generation Architectures for Reading Comprehension. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023) (pp. 556–566). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.bea-1.47
