Asking Questions Like Educational Experts: Automatically Generating Question-Answer Pairs on Real-World Examination Data

15 Citations · 75 Mendeley Readers

Abstract

Generating high-quality question-answer pairs is a difficult but meaningful task. Although previous work has achieved strong results on answer-aware question generation, such methods are hard to apply in practical educational settings. This paper is the first to address question-answer pair generation on real-world examination data, and it proposes a new unified framework on RACE. To capture the important information in the input passage, we first automatically generate (rather than extract) keyphrases, reducing the task to joint keyphrase-question-answer triplet generation. Accordingly, we propose a multi-agent communication model that generates and optimizes the question and keyphrases iteratively, and then uses the generated question and keyphrases to guide answer generation. To establish a solid benchmark, we build our model on a strong generative pre-trained model. Experimental results show that our model achieves substantial improvements on the question-answer pair generation task. Moreover, we conduct a comprehensive analysis of our model, suggesting new directions for this challenging task.
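To make the pipeline described in the abstract concrete, below is a minimal sketch of the control flow it implies: generate keyphrases from the passage, iteratively refine the question and keyphrases (the paper frames this as multi-agent communication), and finally generate an answer guided by both. All function names and the stub "generators" are hypothetical placeholders for illustration only, not the authors' implementation or models.

```python
# Hypothetical sketch of a keyphrase-question-answer triplet generation loop.
# The stub functions stand in for learned generative models.

from typing import List, Tuple


def generate_keyphrases(passage: str) -> List[str]:
    # Placeholder: a real system would generate keyphrases with a trained model.
    return [w.strip(".,") for w in passage.split() if len(w) > 7][:3]


def generate_question(passage: str, keyphrases: List[str]) -> str:
    # Placeholder question generator conditioned on the passage and keyphrases.
    return f"What does the passage say about {', '.join(keyphrases)}?"


def refine_keyphrases(passage: str, question: str, keyphrases: List[str]) -> List[str]:
    # Placeholder refinement step: in the paper's framing, the question and
    # keyphrase "agents" exchange information over several rounds.
    return keyphrases


def generate_answer(passage: str, question: str, keyphrases: List[str]) -> str:
    # Placeholder answer generator guided by the question and keyphrases.
    return " ".join(keyphrases)


def generate_qa_pair(passage: str, n_rounds: int = 2) -> Tuple[str, str]:
    keyphrases = generate_keyphrases(passage)
    question = generate_question(passage, keyphrases)
    for _ in range(n_rounds):  # iterative communication between the two steps
        keyphrases = refine_keyphrases(passage, question, keyphrases)
        question = generate_question(passage, keyphrases)
    answer = generate_answer(passage, question, keyphrases)
    return question, answer


if __name__ == "__main__":
    passage = ("Educational experts write examination questions that focus on "
               "the most informative parts of a reading passage.")
    print(generate_qa_pair(passage))
```

The key design point the sketch illustrates is the ordering: the answer is produced last, conditioned on an already refined question and keyphrase set, rather than being extracted first as in answer-aware question generation.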

Citation (APA)

Qu, F., Jia, X., & Wu, Y. (2021). Asking Questions Like Educational Experts: Automatically Generating Question-Answer Pairs on Real-World Examination Data. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 2583–2593). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.202
