Comparing Neural Question Generation Architectures for Reading Comprehension

Abstract

In recent decades, there has been a significant push to leverage technology to aid both teachers and students in the classroom. Advances in language processing have been harnessed to provide better tutoring services, automated feedback to teachers, improved peer-to-peer feedback mechanisms, and measures of students' reading comprehension. Automated question generation systems have the potential to significantly reduce teachers' workload for the latter task. In this paper, we compare three different neural architectures for question generation across two types of reading material: narratives and textbooks. For each architecture, we explore the benefits of including question attributes in the input representation. Our results show that a T5 architecture has the best overall performance, with a RougeL score of 0.536 on a narrative corpus and 0.316 on a textbook corpus. We break down the results by attribute and find that conditioning on the attribute can improve the quality of some types of generated questions, including Action and Character questions, but this does not hold for all models.
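To make the attribute-conditioned setup concrete, the following is a minimal sketch of question generation with a T5 model, where a question attribute (e.g. Action) is prepended to the passage and RougeL is computed against a reference question. The "attribute: ... context: ..." input format, the checkpoint name, and the example passage are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of attribute-conditioned question generation with T5.
# The "attribute: ... context: ..." input format is an assumption for
# illustration, not necessarily the formatting used in the paper, and
# "t5-base" stands in for whatever fine-tuned checkpoint the authors trained.
from transformers import T5ForConditionalGeneration, T5TokenizerFast
from rouge_score import rouge_scorer

model_name = "t5-base"
tokenizer = T5TokenizerFast.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

def generate_question(context, attribute=None, max_new_tokens=48):
    """Generate a question for a passage, optionally conditioned on a question attribute."""
    prefix = f"attribute: {attribute} " if attribute else ""
    inputs = tokenizer(prefix + "context: " + context, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Score a generated question against a reference with RougeL (F-measure).
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
generated = generate_question(
    "Maria hid the key under the old oak tree before leaving town.",
    attribute="Action",
)
reference = "What did Maria do with the key before leaving town?"
print(generated)
print(scorer.score(reference, generated)["rougeL"].fmeasure)
```

An off-the-shelf checkpoint will not reproduce the reported scores without fine-tuning on the narrative or textbook corpora; the sketch only illustrates how an attribute prefix can enter the input representation and how RougeL is computed per example.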

Citation (APA)

Perkoff, E. M., Bhattacharyya, A., Cai, J. Z., & Cao, J. (2023). Comparing Neural Question Generation Architectures for Reading Comprehension. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 556–566). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.bea-1.47
