In this study, we investigate using the pre-trained BERT language model for question generation. We introduce two neural architectures built on top of BERT. The first is a straightforward application of BERT, which exposes the shortcomings of using BERT directly for text generation. The second remedies these shortcomings by restructuring the model into a sequential decoder that conditions on previously decoded outputs. Our models are trained and evaluated on the SQuAD question-answering dataset. Experimental results show that our best model achieves state-of-the-art performance, improving the BLEU-4 score of the previous best model from 16.85 to 21.04.
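To illustrate the sequential decoding idea described above, the following is a minimal sketch (not the authors' exact architecture or fine-tuned model) of generating a question token by token with a pre-trained BERT masked language model: at each step the passage, the tokens decoded so far, and a [MASK] placeholder are fed to BERT, and the prediction at the [MASK] position becomes the next token. The model name, the greedy decoding, and the `generate_question` helper are illustrative assumptions.

```python
# Sketch of sequential mask-predict decoding with a pre-trained BERT MLM.
# Without fine-tuning on SQuAD, the output will not be a good question;
# the point is only to show how decoding can condition on previous outputs.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def generate_question(passage, max_len=20):
    generated = []  # token ids decoded so far
    for _ in range(max_len):
        # Input pair: [CLS] passage [SEP] decoded-so-far [MASK] [SEP]
        prefix = tokenizer.decode(generated) if generated else ""
        inputs = tokenizer(passage, prefix + " " + tokenizer.mask_token,
                           return_tensors="pt", truncation=True)
        mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
        with torch.no_grad():
            logits = model(**inputs).logits
        next_id = int(logits[0, mask_pos].argmax())
        if next_id == tokenizer.sep_token_id:  # treat [SEP] as end of question
            break
        generated.append(next_id)
    return tokenizer.decode(generated)

print(generate_question("BERT was introduced by Google in 2018."))
```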
CITATION STYLE
Chan, Y. H., & Fan, Y. C. (2019). BERT for question generation. In INLG 2019 - 12th International Conference on Natural Language Generation, Proceedings of the Conference (pp. 173–177). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-8624