MixQG: Neural Question Generation with Mixed Answer Types

31 citations · 71 Mendeley readers

Abstract

Asking good questions is an essential ability for both human and machine intelligence. However, existing neural question generation approaches mainly focus on short, factoid-style answers. In this paper, we introduce a neural question generator, MixQG, to bridge this gap. We combine nine question answering datasets with diverse answer types, including yes/no, multiple-choice, extractive, and abstractive answers, to train a single generative model. Empirical results show that our model outperforms existing work in both seen and unseen domains, and can generate questions at different cognitive levels when conditioned on different answer types. We run a human evaluation study to assess the quality of generated questions and find that MixQG outperforms the next best model by 10%. Our code and model checkpoints will be released and integrated with the HuggingFace library to facilitate various downstream applications.
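The abstract notes that the checkpoints are meant to plug into the HuggingFace ecosystem. Below is a minimal sketch of answer-conditioned question generation with the transformers pipeline API; the checkpoint name "Salesforce/mixqg-base" and the answer-then-context input format joined by a "\n" separator are assumptions about the released integration, not details stated in the abstract.

from transformers import pipeline

# Load a MixQG checkpoint as a text-to-text generation pipeline.
# The checkpoint name is an assumption about how the release is published.
qg = pipeline("text2text-generation", model="Salesforce/mixqg-base")

def format_input(answer: str, context: str) -> str:
    # Answer-conditioned input: the target answer precedes the passage,
    # joined by a literal "\n" separator (assumed format).
    return f"{answer} \\n {context}"

context = (
    "Robert Boyle was an Anglo-Irish natural philosopher, chemist, physicist, "
    "and inventor, regarded today as the first modern chemist."
)
answer = "Robert Boyle"

result = qg(format_input(answer, context), max_length=64)
print(result[0]["generated_text"])
# Expected: a question answerable by "Robert Boyle",
# e.g. "Who is regarded as the first modern chemist?"

Conditioning on the answer span is what lets a single model cover the mixed answer types described above: swapping in a yes/no or abstractive answer should yield questions at a different cognitive level for the same passage.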

Citation (APA)

Murakhovs’ka, L., Wu, C. S., Laban, P., Niu, T., Liu, W., & Xiong, C. (2022). MixQG: Neural Question Generation with Mixed Answer Types. In Findings of the Association for Computational Linguistics: NAACL 2022 - Findings (pp. 1486–1497). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-naacl.111
