Abstract
Stories generated with neural language models have shown promise in grammatical and stylistic consistency. However, the generated stories are still lacking in common sense reasoning, e.g., they often contain sentences devoid of world knowledge. We propose a simple multi-task learning scheme to achieve quantitatively better common sense reasoning in language models by leveraging auxiliary training signals from datasets designed to provide common sense grounding. When combined with our two-stage fine-tuning pipeline, our method achieves improved common sense reasoning and state-of-the-art perplexity on the WritingPrompts (Fan et al., 2018) story generation dataset.
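To make the multi-task scheme concrete, the sketch below mixes a language-modeling loss with an auxiliary common-sense classification loss under a fixed weighting. This is a minimal illustrative PyTorch example, not the authors' implementation: the model architecture, the plausible-vs.-implausible auxiliary task, and the `aux_weight` coefficient are all assumptions made for the sake of the example.

```python
import torch
import torch.nn as nn

# Minimal sketch of multi-task training: a next-token language-modeling
# objective plus an auxiliary common-sense objective. Architecture, task
# framing, and aux_weight are illustrative assumptions, not the paper's setup.

class ToyMultiTaskLM(nn.Module):
    def __init__(self, vocab_size=1000, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.lm_head = nn.Linear(hidden, vocab_size)  # next-token prediction
        self.cls_head = nn.Linear(hidden, 2)          # plausible vs. implausible

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.lm_head(h), self.cls_head(h[:, -1])

model = ToyMultiTaskLM()
ce = nn.CrossEntropyLoss()

tokens = torch.randint(0, 1000, (4, 16))   # toy batch of story token ids
cs_labels = torch.randint(0, 2, (4,))      # toy common-sense grounding labels

lm_logits, cls_logits = model(tokens[:, :-1])
lm_loss = ce(lm_logits.reshape(-1, 1000), tokens[:, 1:].reshape(-1))
aux_loss = ce(cls_logits, cs_labels)       # auxiliary common-sense signal

aux_weight = 0.5                           # assumed mixing coefficient
loss = lm_loss + aux_weight * aux_loss
loss.backward()
```

In a two-stage pipeline of the kind the abstract describes, one would plausibly first fine-tune on the auxiliary grounding data with a loss like this, then fine-tune on the target story corpus; the exact staging here is an assumption.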
Citation
Mao, H. H., Majumder, B. P., McAuley, J., & Cottrell, G. W. (2019). Improving neural story generation by targeted common sense grounding. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (pp. 5988–5993). Association for Computational Linguistics. https://doi.org/10.18653/v1/D19-1615