BERT for question generation


Abstract

In this study, we investigate the use of the pre-trained BERT language model for question generation. We introduce two neural architectures built on top of BERT for this task. The first is a straightforward application of BERT, which exposes the shortcomings of using BERT directly for text generation. The second remedies the first by restructuring the BERT decoding into a sequential process that conditions on previously decoded results. Our models are trained and evaluated on the question-answering dataset SQuAD. Experimental results show that our best model achieves state-of-the-art performance, improving the BLEU-4 score of the best existing models from 16.85 to 21.04.
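As a rough illustration of the sequential decoding idea described above (not the authors' released code), the sketch below uses a masked language model from the Hugging Face transformers library: a [MASK] token is appended after the context and any previously generated tokens, the model predicts it, and the prediction is fed back for the next step. The model name, stopping criterion, and token joining are simplifying assumptions.

```python
# Minimal sketch of sequential question generation with a masked LM.
# Assumptions: bert-base-uncased as the underlying model, greedy decoding,
# and naive whitespace joining of predicted tokens (subwords are not merged).
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def generate_question(context, max_len=20):
    generated = []
    for _ in range(max_len):
        # Append previously decoded tokens plus one [MASK] to be predicted.
        text = context + " [SEP] " + " ".join(generated) + " [MASK]"
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            logits = model(**inputs).logits
        # Position of the [MASK] token we just appended (the last one).
        mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[-1].item()
        next_id = logits[0, mask_pos].argmax().item()
        next_token = tokenizer.decode([next_id]).strip()
        if next_token == "[SEP]":
            break
        generated.append(next_token)
    return " ".join(generated)

print(generate_question("BERT is a pre-trained language model released by Google in 2018."))
```

This is only meant to convey why a single forward pass of BERT is ill-suited to generation and why the paper's sequential formulation conditions each step on the tokens decoded so far; the paper's actual architecture and training procedure differ in detail.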

Citation (APA)

Chan, Y. H., & Fan, Y. C. (2019). BERT for question generation. In INLG 2019 - 12th International Conference on Natural Language Generation, Proceedings of the Conference (pp. 173–177). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-8624
