SWAGex at SemEval-2020 Task 4: Commonsense Explanation as Next Event Prediction


Abstract

We describe the system submitted by the SWAGex team to the SemEval-2020 Commonsense Validation and Explanation Task. We apply multiple methods built on the pre-trained language model BERT (Devlin et al., 2018) to tasks that require the system to recognize sentences that go against commonsense and to justify the reasoning behind this decision. Our best-performing model is BERT trained on SWAG and then fine-tuned for the task. We investigate the ability to transfer commonsense knowledge from SWAG to SemEval-2020 by training a model for the Explanation task with Next Event Prediction data.
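
As a rough illustration of the SWAG-style multiple-choice formulation described in the abstract, the sketch below pairs a nonsensical statement with candidate explanations and scores them with BERT's multiple-choice head. It assumes the Hugging Face transformers library; the model name, example sentences, and the untrained classification head are illustrative assumptions, not the authors' exact setup, which trains on SWAG before fine-tuning on the task data.

```python
# Minimal sketch: commonsense explanation framed as SWAG-style
# next event prediction with a multiple-choice BERT head.
# Assumes the Hugging Face transformers library; inputs are illustrative.
import torch
from transformers import BertTokenizer, BertForMultipleChoice

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMultipleChoice.from_pretrained("bert-base-uncased")
model.eval()  # the choice head would first be trained on SWAG / task data

# A statement against commonsense and three candidate explanations.
statement = "He put an elephant into the fridge."
candidates = [
    "An elephant is much bigger than a fridge.",
    "Elephants are usually gray.",
    "A fridge is used to keep food cold.",
]

# Encode the statement paired with each candidate, mirroring SWAG's
# (context, ending) multiple-choice format.
encoding = tokenizer(
    [statement] * len(candidates),
    candidates,
    return_tensors="pt",
    padding=True,
)
# Reshape to (batch_size=1, num_choices, seq_len) as expected by the model.
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)

print("Predicted explanation:", candidates[logits.argmax(dim=-1).item()])
```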

Citation

Rim, W. B., & Okazaki, N. (2020). SWAGex at SemEval-2020 Task 4: Commonsense Explanation as Next Event Prediction. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval 2020), co-located with the 28th International Conference on Computational Linguistics (COLING 2020) (pp. 422–429). International Committee for Computational Linguistics. https://doi.org/10.18653/v1/2020.semeval-1.51
