Abstract
While the “implicit knowledge” internalized by pretrained transformers has driven progress on many natural language understanding tasks, how best to elicit that knowledge remains an open question. Building on the text-to-text transfer transformer (T5) model, this work explores a template-based approach to extracting implicit knowledge for commonsense reasoning on multiple-choice (MC) question answering tasks. Experiments on three representative MC datasets show the surprisingly strong performance of our simple template, coupled with a logit normalization technique for disambiguation. Furthermore, we verify that our proposed template extends easily to other MC tasks with contexts, such as supporting facts in open-book question answering settings. Starting from the MC task, this work opens up further research into generic natural language templates that can effectively leverage the knowledge stored in pretrained models.
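The abstract names two concrete ingredients, a natural language template and logit normalization over the answer choices. The sketch below illustrates how such template-based MC scoring with T5 could look in practice; it is a minimal sketch assuming a Hugging Face transformers setup, and the template wording ("question: … answer:"), the length-normalized log-likelihood score, and the softmax over choices are illustrative assumptions, not the paper's exact template or normalization technique.

```python
import torch
import torch.nn.functional as F
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.eval()

def choice_loglik(question: str, choice: str) -> float:
    """Length-normalized log-likelihood of `choice` under a filled template."""
    # Hypothetical template filler; the paper compares several template
    # designs, and this wording is only an illustrative stand-in.
    source = f"question: {question} answer:"
    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(choice, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**inputs, labels=labels)
    # out.loss is the mean per-token cross-entropy over the target, so its
    # negation is a log-likelihood already normalized by choice length.
    return -out.loss.item()

question = "Where would you most likely find a seat belt?"
choices = ["car", "kitchen", "forest"]

scores = torch.tensor([choice_loglik(question, c) for c in choices])
# Normalize scores across the candidate set before comparing them; a
# softmax stands in here for the paper's logit normalization technique.
probs = F.softmax(scores, dim=0)
print({c: round(p.item(), 3) for c, p in zip(choices, probs)})
print("prediction:", choices[int(probs.argmax())])
```

Some form of normalization matters here because a longer choice accumulates lower summed log-probability than a shorter one, so comparing unnormalized sequence scores across choices is biased toward short answers.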
Citation
Lin, S. C., Yang, J. H., Nogueira, R., Tsai, M. F., Wang, C. J., & Lin, J. (2020). Designing Templates for Eliciting Commonsense Knowledge from Pretrained Sequence-to-Sequence Models. In Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020) (pp. 3449–3453). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.coling-main.307